I'm assessing how much to scale my hardware by using a small deployment as a benchmark. I read from "DS-Technical-Brief-QlikView-Architecture-and-System-Resource-Usage-EN.pdf" and "BI-WP-QlikView-Scalability-Overview-EN.pdf" that performance is linearly proportional to a combination of RAM and CPU capacity.
I have some doubts and would appreciate it if someone could explain them to me.
1) Currently, my small deployment is about 10MB, and it takes 22 seconds for the server to refresh the data. Aggregations and selections are instantaneous. My objective is to find out what performance will be like if I increase the source data (and thus the QVW size?).
2) Assuming the worst case, I have:
QVWsize = 500MB
No. of concurrent users = 25
FileSizeMultiplier = I am going to use 10 here, since the PDF states it ranges from 2 to 10.
Question 1: In practice, how do I decide which multiplier to use?
userRAMratio = I use 10%
Going by the attached formula, the RAM I require will be 17.5GB.
Question 2: As long as my server has more than 17.5GB of RAM, memory should be sufficient to achieve the same 22-second performance, right?
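To make sure I'm reading the formula correctly, here is my arithmetic as a short script. The structure (initial footprint plus a per-user increment) is my interpretation of the PDF's formula, and the variable names are mine; please correct me if I've misread it:

```python
# My understanding of the sizing formula (assumed structure):
#   RAM_initial  = QVWsize * FileSizeMultiplier      # footprint of the open document
#   RAM_per_user = RAM_initial * userRAMratio        # extra RAM per concurrent session
#   RAM_total    = RAM_initial + users * RAM_per_user

qvw_size_gb = 0.5          # 500 MB worst-case document
file_size_multiplier = 10  # top of the 2-10 range from the PDF
users = 25                 # concurrent users
user_ram_ratio = 0.10      # 10%

ram_initial = qvw_size_gb * file_size_multiplier   # 5.0 GB
ram_per_user = ram_initial * user_ram_ratio        # 0.5 GB per user
ram_total = ram_initial + users * ram_per_user
print(f"Total RAM required: {ram_total} GB")       # 17.5 GB
```

This reproduces the 17.5GB figure, so hopefully I have the formula right.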
3) I have not gotten to this step yet, but I know I need to measure the CPU utilization of my small deployment.
Question 3: After that, do I scale the measured CPU by (500MB/10MB) to calculate the CPU required in my production setup, in order to keep the same performance as my small deployment?
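To spell out the assumption I'd like confirmed (that CPU requirement scales linearly with source data size), here is the calculation I have in mind; the measured core count is a placeholder until I actually profile the small deployment:

```python
# Assumption to be confirmed: CPU need scales linearly with data size,
# so the scale factor is simply the ratio of the QVW sizes.
small_qvw_mb = 10
prod_qvw_mb = 500
scale_factor = prod_qvw_mb / small_qvw_mb          # 50x

# Hypothetical measurement (to be replaced by real profiling):
measured_cores_small = 2                           # placeholder value
required_cores = measured_cores_small * scale_factor
print(f"Scale factor: {scale_factor:.0f}x -> {required_cores:.0f} cores")
```

Is this linear extrapolation the right way to read the scalability document, or does CPU scale differently from data size?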
4) I saw this figure in your document:
Question 4a: I assume "sessions" is equivalent to the number of open QVW files?
Question 4b: What is "selections per session", and how do I measure it?
Question 4c: Does this response time refer to when users make a new selection, or to when the server is refreshing data?
I know I am asking a lot of questions here. Hopefully someone will be kind enough to explain the above.