Me

Me is a personal data aggregator.

In fact, it's so personal that it will really only work for me, its creator (and even for me, it doesn't do nearly as much as I would like).

This has seen on-and-off work over a couple of iterations. My motivation was to work around my terrible memory by answering, to the best of my ability, "What the hell was I doing?" for any given time frame.

Architecturally, this is a Golang/Gin server, a ReactJS web client, and a React Native client, all wrapped into one repository (kids these days seem to call it a monorepo). The server is designed to communicate with a Postgres database, and for most client views will simply act as a proxy server to Plaid, Dark Sky, CTA Train Tracker, or whatever else ends up getting integrated. The intention is to only store in the database what cannot be retrieved from someone else's API for free, because I am cheap and my homelab has too few 9s of availability to justify anything more involved.

At the time of writing, the MVP is still not complete, but it will include:

  • Passive location logging via the mobile client
  • Dynamic maps of location for ranges of time
  • Tables of bank account balances and recent transactions
  • Statuses of CTA trains near your home and work, or by station number
  • Weather forecasts for your location or based on last logged GPS coordinate

Last time this README was updated, I had an API that proxied requests to those services and a web client that supported logging in and viewing account balances. We'll see how much more I get done before my focus gets drawn elsewhere.

Development Dependencies

Configuration

Plaid

This application depends on Plaid's API to retrieve account information from financial institutions. Because it is intended for a single user, it is assumed to be deployed with their development API keys.

Keys for client_id, public_key, and secret_key can be obtained from Plaid's website after registration. Rather than tie this application into their Plaid Link code, I've chosen to clone their quickstart repository and run Plaid Link locally to retrieve an access_token. These tokens never expire, so this is a one-time process.

Dark Sky

Dark Sky is used to gather weather information and forecasts. You will need to get a Dark Sky API secret key by registering for a free account on their website.

Configuration File

A configuration file needs to be created and stored in the main directory as .config.yml. The file should have the following format:

me:
  port: ":PORT_TO_SERVE_ON"
  environment: BUILD_ENVIRONMENT
  web_dir: ./web/public

cta:
  secret_key: CTA_KEY
  home_station: 12345
  work_station: 12345

dark_sky:
  secret_key: DARK_SKY_SECRET_KEY

db:
  address: ADDRESS_OF_POSTGRES_DB

plaid:
  client_id: PLAID_CLIENT_ID 
  public_key: PLAID_PUBLIC_KEY 
  secret_key: PLAID_SECRET_KEY 
  environment: development
  version: 2019-05-29

Managing Database Schema

The general idea for managing the database schema is to ignore all the best practices I am aware of, as I have found them extremely tedious or messy in every instance I've come across.

Instead of storing schema patches, my intention is to apply updates directly to the schema from the psql command line during development and commit the updated schema to the repository via pg_dump -s. Progressive updates will be applied to the server's database on deploy; regressive updates will be managed by diffing schema versions with a tool akin to migra, then patching manually.

For now, this is all just my idea of how it will work. It would obviously never scale, but for a single-developer project I'm excited to see how this workflow holds up.