I think you will need to implement an incremental load approach, maybe on multiple csv-slices (for example on a daily level) or per ODBC, which would massively reduce the load times: Advanced topics for creating a qlik datamodel (see the last two link-blocks in it).
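A minimal sketch of such an incremental load, assuming a hypothetical key field Id, a lib://Data connection, and made-up file names - the idea is to load only the new daily slice from csv and pull the historical records from a qvd, which loads far faster:

```
// Sketch of an incremental load (key field, connection and file names are placeholders).
// 1. Load only the newest daily csv-slice:
Facts:
LOAD Id, OrderDate, Amount
FROM [lib://Data/orders_daily.csv]
(txt, utf8, embedded labels, delimiter is ',');

// 2. Append the historical records from the qvd, skipping any keys
//    that already arrived in the new slice (insert + update pattern):
Concatenate (Facts)
LOAD Id, OrderDate, Amount
FROM [lib://Data/orders_history.qvd] (qvd)
WHERE NOT Exists(Id);

// 3. Store the combined table back for the next run:
STORE Facts INTO [lib://Data/orders_history.qvd] (qvd);
```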
Further, I would check whether all 200+ attributes/columns are really necessary within a single application, or whether it could be logically split, and of course remove all fields which are not useful for a user, like record IDs: Search Recipes | Qlikview Cookbook.
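For example (field names are illustrative), unneeded fields can simply be left out of the load statement, or dropped afterwards:

```
// List only the fields the users actually need; technical keys like
// RecordId are left out of the load entirely:
Orders:
LOAD CustomerId, OrderDate, Amount
FROM [lib://Data/orders.csv]
(txt, utf8, embedded labels, delimiter is ',');

// Alternatively, remove a field after it has been loaded:
// DROP FIELD RecordId;
```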
The next step will be to look at the number of distinct field values: The Importance Of Being Distinct.
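A common measure here is splitting high-cardinality fields, e.g. a timestamp into a date part and a time part, which shrinks the symbol tables considerably. A sketch with an assumed field EventTimestamp:

```
// Instead of millions of unique timestamps you get roughly 365 distinct
// dates per year plus at most 86400 distinct times (if the source data
// has second granularity):
Events:
LOAD
    Date(Floor(EventTimestamp)) AS EventDate,
    Time(Frac(EventTimestamp))  AS EventTime
    // further fields ...
FROM [lib://Data/events.csv]
(txt, utf8, embedded labels, delimiter is ',');
```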
Of course the data model itself becomes quite important with larger datasets and should rather be a star schema or even one big flat table. All heavy calculations should be implemented within the script, so that the UI expressions can be built with simple sum/count/avg expressions, avoiding (nested) if-conditions, aggr- and inter-record functions.
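As an illustration (the threshold and field names are made up), a condition that would otherwise sit inside a chart if() can be pre-calculated as a flag in the script and then consumed with a plain sum:

```
// Pre-calculate the condition once at load time ...
Orders:
LOAD
    OrderId,
    Amount,
    If(Amount > 1000, 1, 0) AS IsLargeOrder
FROM [lib://Data/orders.qvd] (qvd);

// ... so the UI expression stays a simple, fast aggregation, e.g.:
//   Sum(Amount * IsLargeOrder)
// or with set analysis:
//   Sum({< IsLargeOrder = {1} >} Amount)
```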
I could imagine that in the end only about 10% of the 7 GB of raw data will remain within the qvf (without any splitting and/or document chaining - that's rather the worst case, if the amount of data really couldn't be handled within a single application), and the user experience regarding performance will be quite good.