2017 was a very busy year for me: getting engaged in Gatineau, QC, followed by tying the knot in mid-December. At work I have been developing a mobile application, and I now want to build this project as a mobile app as well, with iOS/Apple as the platform of choice due to what seems to be a more intuitive development process. This will also let me explore, learn, and integrate features that give a better end-user experience within the app.
One feature in this vein: when listing out the quilts being tracked, load the items (mainly the featured image, which I'm thinking will be the one labeled ‘top’, or an OTB image for in-progress projects) as they come into view, and keep a local cache to preserve bandwidth, perhaps detecting WiFi vs. cellular data.
Below is the current schema I have in mind. I created the tables in DynamoDB, since I am implementing my app using Amazon Web Services as the back-end.
Key aspects of this schema that will hopefully allow me to make this application work are as follows:
- Based on the DynamoDB/NoSQL implementation, UserId and the CreatedOn date (down to the millisecond) will act as the primary key (partition and sort key, respectively)
- When a table needs a reference to an item in another table, it will store that record's UserId and CreatedOn in the following format: UserId#CreatedOn
- This will hopefully allow me to parse and retrieve the item when necessary from the other table
- Anyone can add items to tables marked Public, since we do not need to duplicate authors, etc., and the app will only allow editing of certain properties (e.g. in Author, only DOD would be editable)
- Tables marked Protected allow other users to view the entries and comment (e.g. on a quilt project, or on specific images or patterns)
- DateFinished for a book will allow options like: In Progress, Unknown, or an actual date formatted MM/DD/YYYY
- May follow similar format for CompletedOn with a Quilt
- Default to ‘In Progress’
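The UserId#CreatedOn reference format described above can be sketched as a pair of helpers. This is a minimal illustration, not actual app code; the function names (`make_ref`, `parse_ref`) and the sample identifiers are my own for this example:

```python
from datetime import datetime, timezone

def make_ref(user_id: str, created_on: str) -> str:
    """Combine the partition key (UserId) and sort key (CreatedOn)
    into the single reference string stored in another table."""
    return f"{user_id}#{created_on}"

def parse_ref(ref: str) -> tuple[str, str]:
    """Split a stored reference back into (UserId, CreatedOn) so the
    referenced item can be fetched from its own table."""
    user_id, created_on = ref.split("#", 1)
    return user_id, created_on

# Example: a quilt referencing a pattern record, with CreatedOn kept
# to millisecond precision as described above.
created = datetime(2018, 1, 1, 12, 30, 45, 123000, tzinfo=timezone.utc)
ref = make_ref("user-42", created.isoformat(timespec="milliseconds"))
print(ref)             # user-42#2018-01-01T12:30:45.123+00:00
print(parse_ref(ref))  # ('user-42', '2018-01-01T12:30:45.123+00:00')
```

Splitting on the first `#` keeps the parse unambiguous as long as UserId itself never contains that character.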
Update January 1, 2018:
Well, I’ve discovered one thing. This implementation, which somewhat parallels a relational database by referencing related items in another table, consumes a lot of read/write capacity units (a measure AWS uses to calculate costs), and the free tier only provides 25, which according to them is more than enough for most applications. Check out the FAQ.
This leads me to believe I need to rewrite the storage structure to store the bulk of the information in a JSON object in a column of type Map.
Thus, based on the following from the AWS documentation, I need to figure out how much storage each item would take as a JSON object:
A unit of Write Capacity enables you to perform one write per second for items of up to 1KB in size. Similarly, a unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually consistent reads per second) of items of up to 4KB in size. Larger items will require more capacity. You can calculate the number of units of read and write capacity you need by estimating the number of reads or writes you need to do per second and multiplying by the size of your items (rounded up to the nearest KB).
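The arithmetic in that quote can be turned into a quick back-of-the-envelope calculator. A sketch (helper names are mine, and this is per single read or write per second, before multiplying by request rate):

```python
import math

def write_capacity_units(item_size_bytes: int) -> int:
    """One WCU covers one write/sec of an item up to 1 KB;
    larger items round up to the nearest KB."""
    return math.ceil(item_size_bytes / 1024)

def read_capacity_units(item_size_bytes: int, strongly_consistent: bool = True) -> float:
    """One RCU covers one strongly consistent read/sec of an item up to
    4 KB, or two eventually consistent reads of the same size."""
    units = math.ceil(item_size_bytes / 4096)
    return units if strongly_consistent else units / 2

# A ~2.5 KB item:
print(write_capacity_units(2500))  # 3 WCUs per write per second
print(read_capacity_units(2500))   # 1 RCU per strongly consistent read
print(read_capacity_units(2500, strongly_consistent=False))  # 0.5
```

This makes the free-tier squeeze concrete: at 25 units, a handful of cross-table lookups per screen on multi-KB items adds up quickly.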
Now, I may just be overcomplicating things, but this will hopefully eliminate the need for so many tables; perhaps a column to indicate the object type, with the rest stored as JSON, and that type column indexed (which increases capacity units per table, by the way).
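As a rough sketch of what a single-table item might look like under this approach, and how to ballpark its size: everything below (attribute names like `ItemType` and `Data`, and the sample values) is hypothetical, and DynamoDB's own size accounting differs slightly from raw JSON length, but this gives an order-of-magnitude estimate for capacity planning:

```python
import json

# Hypothetical single-table item: entity data lives in one "Data" map,
# with an "ItemType" attribute distinguishing quilts, patterns, authors, etc.
item = {
    "UserId": "user-42",                      # partition key
    "CreatedOn": "2018-01-01T12:30:45.123Z",  # sort key
    "ItemType": "Quilt",                      # the type column to be indexed
    "Data": {
        "Name": "Winter Star",
        "CompletedOn": "In Progress",
        # cross-reference in the UserId#CreatedOn format
        "Pattern": "user-42#2017-11-03T09:15:00.000Z",
    },
}

# Approximate item size as serialized JSON, for capacity-unit estimates.
size_bytes = len(json.dumps(item).encode("utf-8"))
print(size_bytes)
```

An item this small stays well under the 1 KB write-capacity threshold, which is the whole appeal of collapsing the schema this way.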
To be investigated further at a later date!