- connection and global
- prepare statements and bind variables
- parse, execute and fetch
Everything starts with a controller script: the code that initializes global entities, defines functions and classes, and handles session control (probably through one or more require_once() included files) before dispatching requests to other handlers. This "framework" assumes read-only querying of data using one or more query executions.
Connection and Global
Database connection code can be moved into a separate include_once() file. This file sets the connection and schema variables as globals, makes the connection, and handles any connection errors. I also create empty named (associative) arrays for bind variables and result sets and make both global.
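A minimal sketch of such a connection include (the file name, credential values, and variable names like $conn, $binds, and $results are my own placeholders, not fixed conventions; the extension check is there only so the sketch degrades gracefully where oci8 is absent):

```php
<?php
// db_connect.php -- shared connection include (hypothetical file name).

// Connection and schema settings as globals; placeholder values.
$db_user = 'scott';
$db_pass = 'tiger';
$db_conn_string = 'localhost/XE';

// Global named (associative) arrays shared with the controllers:
// $binds holds bind-variable values, $results holds fetched rows.
$binds   = array();
$results = array();

if (extension_loaded('oci8')) {
    // Make the connection and handle errors in one place.
    $conn = oci_connect($db_user, $db_pass, $db_conn_string);
    if (!$conn) {
        $e = oci_error();
        trigger_error('Connection failed: ' . $e['message'], E_USER_ERROR);
    }
}
```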
Prepare Statements and Bind Variables
The controller script sets up the bind variable values in the global named array. Since a single controller can fetch numerous data models and render multiple views, the queries themselves are built into a named array. Each entry contains the script name (also used as a key in the result set named array), the SQL statement, and a comma-delimited list of variables that need to be bound. Data models can be split into separate include_once() files if necessary.
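As a sketch, that structure might look like the following (the field names 'sql' and 'binds', the $requests variable, and the example queries and tables are all my own invention):

```php
<?php
// Controller: set bind values in the global named array.
$binds = array();
$binds['dept_id'] = 42;   // value for :dept_id
$binds['min_gpa'] = 3.0;  // value for :min_gpa

// One entry per query: the key doubles as the result-set key,
// plus the SQL text and a comma-delimited list of bind variables.
$requests = array(
    'students' => array(
        'sql'   => 'SELECT student_id, last_name, gpa
                      FROM students
                     WHERE dept_id = :dept_id AND gpa >= :min_gpa',
        'binds' => 'dept_id,min_gpa',
    ),
    'courses' => array(
        'sql'   => 'SELECT course_id, title
                      FROM courses
                     WHERE dept_id = :dept_id',
        'binds' => 'dept_id',
    ),
);
```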
Parse, Execute and Fetch
For each item in the named array created in the last step, call oci_parse(), followed by oci_bind_by_name() for each bind variable. Once parsed and bound, call oci_execute(), and if it succeeds, oci_fetch_all() the rows into the named result set array. Since this block of code is reused by numerous controllers (like the connection code), move it into a separate include_once() file.
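That reusable block might be sketched as follows (the post describes a plain include operating on globals; I've wrapped it in a function here only to keep the sketch self-contained, and the $conn, $requests, $binds, and $results names follow my earlier assumptions):

```php
<?php
// query_exec.php -- reusable parse/bind/execute/fetch loop (hypothetical name).
function run_requests($conn, array $requests, array $binds, array &$results)
{
    foreach ($requests as $name => $req) {
        $stmt = oci_parse($conn, $req['sql']);
        if (!$stmt) {
            $e = oci_error($conn);
            trigger_error("Parse failed for $name: " . $e['message'], E_USER_ERROR);
        }

        // Bind each variable named in the comma-delimited list.
        foreach (explode(',', $req['binds']) as $var) {
            oci_bind_by_name($stmt, ':' . $var, $binds[$var]);
        }

        // Read-only querying, so the default commit mode is fine.
        if (oci_execute($stmt)) {
            // Fetch every row into the named result-set array.
            oci_fetch_all($stmt, $results[$name], 0, -1,
                          OCI_FETCHSTATEMENT_BY_ROW + OCI_ASSOC);
        } else {
            $e = oci_error($stmt);
            trigger_error("Execute failed for $name: " . $e['message'], E_USER_ERROR);
        }
        oci_free_statement($stmt);
    }
}
```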
At this point we assume all SQL statements have executed successfully and all resulting rows have been fetched into the result set named array. The controller can then delegate processing to a main view script (template) and any additional view rendering scripts. This code is independent of the database work but needs to be mentioned.
Some might notice there are no "objects" in use here (yet). The named arrays could be considered "cheating" around the use of objects. Theoretically I could make each "query" or "request" an object and create an array of objects instead of a named array inside an array. Then again, I could create a "transaction" class that handles multiple request instances. Or should I create a "database" class and partner it with a "transaction" class that contains multiple "request" instances?
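If I went the object route, a "request" object might be little more than the same data with the parse/bind/execute/fetch steps as a method. Purely a sketch; the class name, property names, and example usage are all invented:

```php
<?php
// Sketch of the "request as object" alternative (all names invented).
class QueryRequest
{
    public $name;
    public $sql;
    public $bindNames;    // array of bind-variable names, no leading colon
    public $rows = array();

    public function __construct($name, $sql, array $bindNames)
    {
        $this->name      = $name;
        $this->sql       = $sql;
        $this->bindNames = $bindNames;
    }

    // Parse, bind, execute, and fetch against an open OCI8 connection.
    public function run($conn, array $binds)
    {
        $stmt = oci_parse($conn, $this->sql);
        foreach ($this->bindNames as $var) {
            oci_bind_by_name($stmt, ':' . $var, $binds[$var]);
        }
        if (oci_execute($stmt)) {
            oci_fetch_all($stmt, $this->rows, 0, -1,
                          OCI_FETCHSTATEMENT_BY_ROW + OCI_ASSOC);
        }
        oci_free_statement($stmt);
        return $this->rows;
    }
}
```

A controller would then hold an array of these objects instead of the nested named array, and a "transaction" class could simply iterate over them calling run().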
This is one of the things that drives me nuts about object oriented programming. As a "legacy" computer geek I grew up programming linear BASIC and machine code on a Commodore 64. I moved on to assembly on various other processors before graduating to "legacy" coding in COBOL and mainframe assembler. My life was linear but highly modular code. More often than not I wonder what makes object oriented programming more efficient or better than "legacy" coding. Objects seem to have their place where an object can be strictly defined as a complete and independent unit. A text box is a good example of an object: it has specifically defined properties and methods, and its code can be reused and extended - it makes sense to me. When it comes to data objects I start wondering "why".
Using a "student" as an example of a data object: does the "student" object contain numerous objects, like a "student ID" object (which contains presentation and validation), or is the "student ID" a property, even though much of its code could be reused beyond a normal "string"? Is a student an extension of a base "person" object, since student and faculty have common attributes? Is the data object simply a representation of a table, or can it span more than one table with one-to-one and/or one-to-many relationships? If it is simply a representation of a table, then isn't retrieval of a large set of objects horribly inefficient? (Say you need 1,000 instances of object x that relate to one or more instances of object y; do you retrieve the 1,000 instances and then make 1,000 additional requests for the related data?)
There are probably a number of people who would disagree with my hard-coding of SQL in the script (or a separate file). I have seen data dictionary models built within PHP that are used to dynamically generate SQL. I dislike this approach. Having worked with SQL on all of the major platforms (Oracle, SQL Server, DB2) and other platforms (MySQL, Datacom DB), I have learned there are enough differences in each dialect to make the hand-coding (and tuning) of SQL worth the time and trouble. In some cases (with an ERP, for example) you do not control table design and relationships. If I know a table function written in PL/SQL combined with a few optimizer hints will increase throughput by four hundred percent, do I take that knowledge into account? Or do I keep the SQL templated and generic and take the performance hit at the cost of hardware upgrades? (I've seen CASE-generated SQL, and compared to hand-tuned SQL you lose response time exponentially, which impacts the entire system.)
I'm realizing I could go on all day and should probably wrap this up. More thoughts later, perhaps.