We have developed an application which runs on the mainframe (z/OS), and it uses CAF, the Call Attach Facility, to talk to DB2/z for storing its data.
Customers who already have DB2/z (and hence pay for it regardless) are unaffected, but others would like to use our application without taking on the expense of the database as well. Under z/OS the licence fees for DB2 are rather high, and our application doesn't really need the insane levels of reliability that it provides.
So what they'd like us to do is run DB2 under zLinux (SLES/RHEL), or run DB2/LUW on a machine totally separate from the mainframe. Or even, though this will probably be harder, use a non-IBM database.
We're looking for a solution that requires minimal change to our code. DB2 has its federated facilities, which allow a program using DB2/z to seamlessly access data on an instance running elsewhere, but that still requires DB2/z and hence won't reduce costs.
What would be the easiest way to shift all the data off the mainframe and allow us to remove the DB2/z dependency completely from our application?
Building on @NealB's answer, another way to create the layers would be to have no SQL in your application layer, but to call subroutines to accomplish your I/O. You indicate you would be willing to create custom builds, so you could supply a set of routines for each commonly-requested persistence layer.
Call the "database connect" module, which for DB2 on z/OS would do the CAF calls, for DB2 on z/Linux would (say) establish an SSL connection to the DBMS. Maintain a structure in memory with a union of pointers to the necessary data structures to communicate with your DBMS of choice.
FWIW, I've seen vendor code that does this, allowing the business logic to be independent of the DBMS implementation. Some shops use VSAM, others DB2, others IMS. The data model gets messy, but sometimes them's the breaks.
This isn't an answer, just a couple of ideas and observations.
One approach I can think of would be to tier your application into an I/O layer and an application layer. The application would run on z/OS and the I/O layer would run on whatever machine hosts the database. All data access would then be via remote procedure calls over TCP/IP or UDP. This would be a lot of work to set up and configure. Worse yet, it may only be appropriate for read-only operations, because preserving the ACID (Atomicity, Consistency, Isolation, Durability) properties of transactions becomes a real nightmare once updates are involved.
As cschneid pointed out, you could try "rolling your own" database management system from open-source components, but that too would probably create more problems than it solves.
I think your observation about "pushing a big rock uphill" sums it up.