I will use the following pseudo SQL as an example: SELECT a, b, c, d, e FROM table1 INNER JOIN table2 ON ... LEFT JOIN table3 ON ... LEFT JOIN table4 ON ... WHERE table1.criteria = 'something' (join conditions omitted). The SELECT on table1 will match about 50,000 rows (records) of a 12,000,000-row table. Each table1 record has 1 to 7 corresponding table2 records. Tables table3 and table4 are (for simplicity's sake) optional (i.e. 0..1).
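Spelled out with hypothetical join keys (a t1_id foreign key, invented here for illustration), the pseudo SQL runs as a single set-based query; a minimal SQLite sketch:

```python
import sqlite3

# Toy schema standing in for the example; t1_id is an assumed join key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, criteria TEXT, a TEXT);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, t1_id INTEGER, b TEXT);
CREATE TABLE table3 (id INTEGER PRIMARY KEY, t1_id INTEGER, c TEXT);
CREATE TABLE table4 (id INTEGER PRIMARY KEY, t1_id INTEGER, d TEXT);
INSERT INTO table1 VALUES (1, 'something', 'a1');
INSERT INTO table2 VALUES (1, 1, 'b1'), (2, 1, 'b2');
INSERT INTO table3 VALUES (1, 1, 'c1');
-- table4 left empty: the LEFT JOIN still returns rows, with NULLs
""")
rows = conn.execute("""
    SELECT t1.a, t2.b, t3.c, t4.d
    FROM table1 t1
    INNER JOIN table2 t2 ON t2.t1_id = t1.id
    LEFT JOIN  table3 t3 ON t3.t1_id = t1.id
    LEFT JOIN  table4 t4 ON t4.t1_id = t1.id
    WHERE t1.criteria = 'something'
""").fetchall()
print(rows)  # the missing table4 record comes back as None
```

One round trip returns the whole joined set; the database does the matching, which is exactly the work the per-object load path ends up redoing row by row.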
Say you have a data class for table1. The data object class has load and save methods that select and update by unique id using SQL. I assume you would have to add a method to the data class that returns a collection of either a) unique ids or b) objects, selected by non-unique key criteria (hopefully an indexed column). If it returns unique ids, you then create a new object per id and load it by ID, which would have horrible performance going one record at a time. Then you would load the corresponding table2 object(s). What about the two optional tables? Would the class for an optional table know to return a null or default value when the load method finds no corresponding record? And back to table1: would you load all columns as part of the standard object load method even if you need maybe ten percent of the record?
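A sketch of option (b), assuming a hypothetical Table1Record data class and a load_table3 helper (both names invented here): one set-based SELECT returns the whole collection of objects instead of loading ids one record at a time, and the optional-table lookup simply hands back None when no record exists:

```python
import sqlite3
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Table1Record:
    id: int
    a: str
    b: str

    @classmethod
    def find_by_criteria(cls, conn, value) -> List["Table1Record"]:
        # One set-based SELECT on the (hopefully indexed) column,
        # instead of selecting ids and then loading one row at a time.
        cur = conn.execute(
            "SELECT id, a, b FROM table1 WHERE criteria = ?", (value,))
        return [cls(*row) for row in cur.fetchall()]

def load_table3(conn, t1_id) -> Optional[tuple]:
    # The 0..1 case: fetchone() returns None when no record exists,
    # so the caller gets None rather than an exception.
    return conn.execute(
        "SELECT c FROM table3 WHERE t1_id = ?", (t1_id,)).fetchone()

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, criteria TEXT, a TEXT, b TEXT);
CREATE TABLE table3 (t1_id INTEGER, c TEXT);
""")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)",
                 [(1, 'something', 'a1', 'b1'),
                  (2, 'other',     'a2', 'b2'),
                  (3, 'something', 'a3', 'b3')])
records = Table1Record.find_by_criteria(conn, 'something')
print(len(records))          # 2
print(load_table3(conn, 1))  # None: no table3 row for that id
```

The partial-load concern has the same shape: the finder lists only the columns it needs in its SELECT, so a "ten percent of the record" variant is just another class method with a shorter column list.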
It just seems so inefficient.
Would you create a completely separate "view" class for every single optimized SQL statement that returns a result set? In other words, try to create a reusable cursor that maps onto a flat, linear class.
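That "view" class idea might look like this minimal sketch (SomethingReportRow and the join keys are hypothetical): a read-only class whose fields map one-to-one onto the columns of a single optimized query, with no per-table object graph behind it:

```python
import sqlite3
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class SomethingReportRow:
    # Read-only "view" class: one flat object per row of one
    # optimized join, not a graph of per-table data objects.
    a: str
    b: str
    c: Optional[str]  # comes from a LEFT JOIN, so it may be None

    _SQL = """
        SELECT t1.a, t2.b, t3.c
        FROM table1 t1
        INNER JOIN table2 t2 ON t2.t1_id = t1.id
        LEFT JOIN  table3 t3 ON t3.t1_id = t1.id
        WHERE t1.criteria = ?
    """

    @classmethod
    def fetch(cls, conn, criteria) -> List["SomethingReportRow"]:
        return [cls(*row) for row in conn.execute(cls._SQL, (criteria,))]

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, criteria TEXT, a TEXT);
CREATE TABLE table2 (t1_id INTEGER, b TEXT);
CREATE TABLE table3 (t1_id INTEGER, c TEXT);
INSERT INTO table1 VALUES (1, 'something', 'a1');
INSERT INTO table2 VALUES (1, 'b1'), (1, 'b2');
""")
report = SomethingReportRow.fetch(conn, 'something')
print(report)
```

The cost of the pattern is exactly what the question suggests: every distinct optimized query needs its own class, but each one is cheap (a column list plus one SQL string) and is immutable by design since it never writes back.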
And now the big question: how do you accomplish this when the ERP system maps its file objects to three or more (usually more) tables in a positional way that cannot be materialized because of time elements?
I am so f'ing glad it's Friday.