Had to rework the Model class a bit, there's some weirdness happening and I'm
unsure if it's part of the rewrite or if it's always been busted. Won't really
know for sure until I start porting sites over to it, I suppose.
Not being used and the logic is pretty damn hacky. I don't believe in JOINs, so
I'm unsure if this support will be re-added in the future or if there will
simply be a baked-in opinion that JOINs are the devil.
Never gets used, and it ended up being somewhat MySQL-specific, as PostgreSQL
favors letting the server handle it instead of hinting at it. Write better
queries, I suppose?
In an effort to only maintain compatibility with the latest version of PHP (currently the 5.5 branch), I dropped the sanity check for whether `json_encode` was available, as it has always been available since PHP 5.2. Dropping this sanity check also allowed me to remove the wrapper function and the `JSON_AVAILABLE` constant. Ideally I'd like to move towards dropping the `Convert` class entirely, but I'll need a way to convert an array to XML as the `RSS` class still leverages it. One thought is to move that code right into the `RSS` class since it never gets used elsewhere, because XML is gross.
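For context, a minimal sketch of the call that replaces the old wrapper (the data here is made up; only `json_encode` and the dropped `JSON_AVAILABLE` constant come from the entry above):

```php
// json_encode() has shipped with PHP since 5.2, so the old JSON_AVAILABLE
// guard and its wrapper function are gone and the built-in is called directly.
$json = json_encode(array('title' => 'Hello', 'tags' => array('php', 'pickles')));
```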
Primary key queries that come back empty should not be cached, because the record may not exist today but could exist in the future, and because the INSERT logic in PICKLES doesn't do any invalidation of the cache.
If you tried to use the extended array syntax to query against the primary key (`id`) and passed it an integer instead of an array, an error occurred. Thanks @geoffoliver!
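To illustrate the fix, here are the two call styles, borrowing the `array('id' => ...)` constructor form from the integer-array entry further down; treat this as a sketch rather than the exact failing code:

```php
// Querying the primary key with an array of values always worked:
$model = new Model(array('id' => array(1, 2, 3)));

// Passing a bare integer through the same syntax used to trigger an error:
$model = new Model(array('id' => 1));
```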
Pass a Model an optional second or third parameter to force the Model to check the cache before running the query and to stash the results under the key for future queries.
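As a hypothetical call site, where the second argument is assumed to be the cache flag (the actual name and position of the parameter is whatever the Model constructor defines):

```php
// Hypothetical: opt this query into checking the cache first and stashing
// the results under the key if it misses.
$model = new Model(array('id' => array(1, 2, 3)), true);
```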
Caching was limited to single row selects against the primary key. Expanded to include cache checking and storing when selecting multiple values against the primary key. Does not work on any other column, due to the fact that other columns may have duplicate values and the mapping could get borked pretty quickly.
If you're pulling data against a single column and returning a single column, the UID will be cached out to a key that can easily be recalled the next time the same query is executed. On UPDATE and DELETE the corresponding keys are deleted.
Sure beats incrementing the variable then setting it! Usage: `$model->record['column'] = '++'; // or '--'`. Only works with single row updates at the moment. May expand into allowing a variable step to be defined (+10 or ++10 or something... I say that because I think the double character is a bit safer, since you could in theory set a value to -10000 and not want it to decrement by 10k).
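Spelled out with a made-up column name, the usage from the entry above:

```php
// The Model translates the magic value into column = column + 1 (or - 1)
// when the single-row UPDATE runs; any other value is a literal assignment.
$model->record['login_count'] = '++'; // or '--' to decrement
```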
Probably should make it part of the Model as well, assuming it doesn't already do that (pretty sure it doesn't). It would help me, as on one of my sites I need to migrate a ton of code, so being able to flip models over one by one would be excellent.
Simplified the logic by only checking one variable. Since "1" is a string but can also represent an integer, I swapped the `is_string` check out for `!is_int`.
Previously `new Model(array(1, 2, 3));` would result in a query like `SELECT * FROM table WHERE 1 AND 2 AND 3;`, which would typically result in an out of memory error depending on the number of rows in the table (as all of them would be returned). Added detection for an array of integers that forces it to be treated as `new Model(array('id' => array(1, 2, 3)))`. As I type this I think I need to go back and make an additional change.
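A rough sketch of the detection, not the actual implementation: if every element of the conditions array is an integer, rewrite it as a condition against the primary key.

```php
$conditions = array(1, 2, 3);

// If the array is nothing but integers, treat them as primary key values.
if ($conditions === array_filter($conditions, 'is_int')) {
    $conditions = array('id' => $conditions);
}

// Now builds WHERE id IN (1, 2, 3) instead of WHERE 1 AND 2 AND 3.
```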
Selects done against a primary key will automatically cache to Memcached (haven't tried it, but it should fail gracefully), indexed by the model name and the primary key (`[NAMESPACE-]MODEL-PKEY`). Any updates or deletes against the same primary key will purge the cache automatically. The major caveat here is the case of mass updates, which would result in stale data. As it stands the data is being cached for a mere 5 minutes, so this multiple row update scenario would be short lived, but ideally I'll be pushing back the time to live on the cache and/or making it configurable. If you have to do mass updates, you're probably doing them with a cron job and should just flush all of the cache in that scenario (as it would be nearly impossible to detect the affected keys and purge them all).
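As a sketch of the pattern using the stock Memcached extension directly (PICKLES' own cache layer will look different), with the key following the `[NAMESPACE-]MODEL-PKEY` format:

```php
$cache = new Memcached();
$cache->addServer('localhost', 11211);

$key = 'blog-post-42'; // [NAMESPACE-]MODEL-PKEY

$row = $cache->get($key);

if ($row === false)
{
    // ... run the SELECT against the primary key and populate $row ...
    $cache->set($key, $row, 300); // 5 minute time to live
}

// An UPDATE or DELETE against the same primary key purges the entry:
$cache->delete($key);
```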
Also obliterated the getters and setters in the Database class after running some tests comparing their speed against getting and setting the variables directly.
Automatically inject the creation, update, and delete timestamps as well as which user performed the action. Rows can now be logically deleted, and there are no more named parameters, just question mark syntax.
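To illustrate the placeholder change with made-up table and column names (only the named-to-question-mark switch and the injected audit columns come from the entry above):

```php
// Before: named parameters
// UPDATE users SET email = :email WHERE id = :id

// After: question mark syntax, with timestamp / user columns injected for you
$sql = 'UPDATE users SET email = ?, updated_at = NOW(), updated_id = ? WHERE id = ?';
```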
The ID variable was used to map the table's UID so the model could inject it properly. Added a new variable named `columns` that is an array of the key columns; it currently contains the ID, created at, and updated at columns. The timestamp columns will soon be injected into the queries, and if a value is set to false, that column will be skipped.
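A hedged sketch of what that might look like in a concrete model; the property layout here is an assumption, only the three columns and the set-to-false-to-skip behavior come from the description:

```php
class MyModel extends Model
{
    // Hypothetical shape; setting a column to false skips it.
    public $columns = array(
        'id'         => 'id',
        'created_at' => 'created_at',
        'updated_at' => 'updated_at',
    );
}
```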
There wasn't much to drop as it was never fully integrated. Unfortunately, the only things that end up being fully integrated are the things that I actually use. Maybe someday, MongoDB, maybe someday.
Expanded the Model class to support queries with priorities as well as the ignore syntax. Priority can be set to LOW or HIGH and will be added to the appropriate queries with `_PRIORITY` appended. Ignore is a boolean, like the Delayed variable.
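Hedged illustration of the resulting MySQL; the keyword mapping is standard MySQL syntax, and how the flags are set on the model isn't shown here:

```php
// priority = 'LOW'  -> LOW_PRIORITY     priority = 'HIGH' -> HIGH_PRIORITY
// ignore   = true   -> IGNORE           delayed  = true   -> DELAYED
$sql = 'INSERT LOW_PRIORITY IGNORE INTO queue (payload) VALUES (?)';
```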