bulkem: Results
18 Jun 2015

Time to fit a single dataset

To test performance across varying dataset sizes, we sample from a two-component inverse Gaussian mixture model with known parameters. Only a single dataset is fit.
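
As a rough illustration of this setup, data of this kind can be generated with the statmod package. This is a minimal sketch: the mixture weight, component means and shapes below are placeholders rather than the parameters used in the benchmark, and `sample_ig_mixture()` is a hypothetical helper, not part of bulkem.

```r
# Illustrative sketch of the data generation described above. The mixture
# weight, means and shapes are placeholders, not the benchmark parameters.
# rinvgauss() comes from the statmod package.
library(statmod)

sample_ig_mixture <- function(n, weight = 0.5,
                              mean = c(1, 5), shape = c(2, 10)) {
  # Choose a component for each observation, then draw from that component
  component <- rbinom(n, size = 1, prob = weight) + 1
  rinvgauss(n, mean = mean[component], shape = shape[component])
}

x <- sample_ig_mixture(1000)
```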

| Dataset size | CPU time (seconds) | GPU time (seconds) | GPU speedup |
|---|---|---|---|
| 100 | 0.00620 | 0.01576 | 0.39 |
| 1,000 | 0.06032 | 0.01572 | 3.84 |
| 10,000 | 0.67876 | 0.03924 | 17.30 |
| 100,000 | 6.35048 | 0.19740 | 32.17 |
| 1,000,000 | 67.98868 | 1.87952 | 36.17 |

On the test hardware, we see that the GPU is slower for small dataset sizes (100 samples) but outperforms the CPU for larger dataset sizes. For datasets with 1 million samples, the GPU runs around 36 times faster than the CPU.

Time to fit many datasets

In this case, the dataset size is held constant (2000 samples) and we fit many datasets simultaneously, generating them in the same way as for the single dataset case.
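
The many-dataset case can be sketched in the same way, reusing the hypothetical `sample_ig_mixture()` helper from above:

```r
# Illustrative only: 1000 independent datasets of 2000 samples each, drawn
# from the same known mixture as in the single-dataset case.
datasets <- replicate(1000, sample_ig_mixture(2000), simplify = FALSE)
```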

| Number of datasets | CPU time (seconds) | GPU time (seconds) | GPU speedup |
|---|---|---|---|
| 1 | 0.10452 | 0.02092 | 5.00 |
| 10 | 1.12048 | 0.05364 | 20.89 |
| 100 | 10.12788 | 0.35904 | 28.21 |
| 1000 | 102.40840 | 3.42036 | 29.94 |

We see similar results: the GPU-to-CPU speedup increases as the number of datasets increases. When 1000 datasets of 2000 samples each are fit simultaneously, the GPU runs around 30 times as fast as the CPU.

Multiple datasets on EC2

Comparing the performance of CPUs and GPUs is somewhat unsound; there is no obvious way to say “this CPU is equivalent to this GPU”. Most papers, including this work, compare performance using whatever hardware the author had available at the time (Gillespie, 2011). No effort was made to optimise the CPU implementation, while significant time was spent optimising the GPU implementation, an issue discussed in depth by Lee et al. (2010).

Fortunately, services such as Amazon EC2 (Amazon Web Services) provide an alternative way to compare the CPU and GPU approaches: rental cost. For a given price, one can rent a certain amount of hardware, which will complete the desired computation in a certain amount of time. Both CPU and GPU time can be rented, so a fairer way to compare the two technologies is the cost of performing a given computation.

A summary of the machine configurations is available from Amazon Web Services (2015). For reference, ECUs (EC2 Compute Units) are Amazon's measure of allocated CPU capacity. The US East region was selected as it is generally the lowest priced.

For the CPU implementation, we selected a c4.large instance, as these provide the best price-performance ratio at the time of writing (two CPU cores and eight ECUs at USD$0.116/hour as of 2015-06-09). t2 instances are not suitable because they provide ‘burstable’ CPU performance and are not intended for long-running jobs. The c4.large has two CPU cores, but the R implementation will only use one. As there are no dependencies between datasets, we will assume that additional CPU cores provide a linear speedup (that is, with appropriate software, we could obtain double the performance with double the CPU cores). The rationale for this is explored further in the linear speedup assumption, and a sketch of the parallel approach follows the table below. Also note that c4 pricing is close to proportional to CPU core count and ECU allocation, so the cost-to-fit ought to remain roughly constant regardless of instance choice.

| Name | Number of CPU cores | ECU allocation | Price per hour (USD) |
|---|---|---|---|
| c4.large | 2 | 8 | 0.116 |
| c4.xlarge | 4 | 16 | 0.232 |
| c4.2xlarge | 8 | 31 | 0.464 |
| c4.4xlarge | 16 | 62 | 0.928 |
| c4.8xlarge | 36 | 132 | 1.856 |
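
The linear speedup assumption rests on the fact that the datasets are independent, so single-dataset fits can be farmed out to worker processes with essentially no communication. A minimal sketch of how this might look in R (here `fit_one` stands in for a single-dataset EM fit and is not a function provided by bulkem):

```r
library(parallel)

# Independent single-dataset fits distributed across CPU cores. With no
# shared state between datasets, doubling the core count should roughly
# halve the wall-clock time. Note that mclapply() forks, so mc.cores > 1
# is unavailable on Windows.
fit_all <- function(datasets, fit_one, cores = detectCores()) {
  mclapply(datasets, fit_one, mc.cores = cores)
}
```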

For the GPU implementation, we chose a g2.2xlarge instance at USD$0.650/hour. Rephrasing this in terms of speedup ratios: since the CPU implementation uses only one of the c4.large's two cores, the GPU implementation must achieve a speedup of at least 0.65 / (0.116 / 2) ≈ 11.2× over a single CPU core in order to break even on cost.
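
The cost figures in the tables below follow directly from the wall-clock times and the hourly rates quoted above. A minimal sketch of the arithmetic (rates as of 2015-06-09):

```r
cpu_rate <- 0.116 / 2  # USD/hour for one c4.large core (the R fit uses one core)
gpu_rate <- 0.650      # USD/hour for a g2.2xlarge

# Cost of a job that occupies the instance for a given number of seconds
cost_usd <- function(seconds, hourly_rate) seconds * hourly_rate / 3600

gpu_rate / cpu_rate    # break-even speedup: ~11.2
```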

As before, all datasets contain 2000 randomly generated samples.

| Datasets (D) | CPU time (seconds) | GPU time (seconds) | CPU cost (USD ×10⁻⁶) | GPU cost (USD ×10⁻⁶) |
|---|---|---|---|---|
| 1 | 0.08912 | 0.01708 | 1.44 | 3.08 |
| 10 | 0.87360 | 0.06940 | 14.07 | 12.53 |
| 100 | 9.17784 | 0.63868 | 147.86 | 115.32 |
| 1000 | 84.44992 | 6.39072 | 1360.58 | 1153.88 |

From this, we can see that the GPU implementation is slightly more cost-effective than the CPU implementation for larger problems. The difference is not large and could probably be eliminated altogether with some optimisation work on the CPU implementation.

These price differences may seem trivial (who cares about microcents?), but recall that real use cases may involve many more datasets (tens of thousands of datasets is the intended use case) and require random initialisation to achieve a good fit (100 random initialisations means 100 times as much work, and therefore 100 times the cost). For 40,000 datasets and 100 random initialisations, the cost is around USD$5.44 using the CPU implementation and USD$4.62 using the GPU implementation.
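
That extrapolation is simply a linear scaling of the measured 1000-dataset costs; as a sanity check:

```r
fits <- 40000 * 100           # datasets x random initialisations
(1360.58e-6 / 1000) * fits    # CPU: ~USD 5.44
(1153.88e-6 / 1000) * fits    # GPU: ~USD 4.62
```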

For the large dataset test (a single dataset of varying size, as in the first section), we obtain the following results on the EC2 hardware:

| Samples (N) | CPU time (seconds) | GPU time (seconds) | CPU cost (USD ×10⁻⁶) | GPU cost (USD ×10⁻⁶) |
|---|---|---|---|---|
| 100 | 0.00724 | 0.01616 | 0.12 | 2.92 |
| 1,000 | 0.05264 | 0.02032 | 0.85 | 3.67 |
| 10,000 | 0.57628 | 0.03264 | 9.28 | 5.89 |
| 100,000 | 6.03700 | 0.22568 | 97.26 | 40.75 |
| 1,000,000 | 50.11764 | 2.25400 | 807.45 | 406.97 |

For sufficiently large problems, the GPU instances can perform the model fits at roughly half the price.

References

Amazon Web Services. Amazon EC2. URL http://aws.amazon.com/ec2/.

Amazon Web Services. Amazon EC2 pricing, 2015. URL http://aws.amazon.com/ec2/pricing/.

C Gillespie. Reviewing a paper that uses GPUs, July 2011. URL https://csgillespie.wordpress.com/2011/07/12/how-to-review-a-gpu-statistics-paper/.

VW Lee, C Kim, J Chhugani, M Deisher, D Kim, AD Nguyen, N Satish, M Smelyanskiy, S Chennupaty, P Hammarlund, R Singhal, and P Dubey. Debunking the 100x GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU. In ISCA ’10: Proceedings of the 37th Annual International Symposium on Computer Architecture, pages 451–460. ACM, 2010.

