Running Python script on CPU/GPU

Chandra sekar Veerappan
1 min read · Oct 13, 2023


Here is a quick test to run a Python script on the CPU and on the GPU, to make sure your workstation/laptop works well in both environments. As mentioned here.

from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer

# normal function to run on cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# function optimized to run on gpu
@jit(target_backend='cuda')
def func2(a):
    for i in range(10000000):
        a[i] += 1

if __name__ == "__main__":
    n = 10000000
    a = np.ones(n, dtype=np.float64)

    start = timer()
    func(a)
    print("without GPU:", timer() - start)

    start = timer()
    func2(a)
    print("with GPU:", timer() - start)
without GPU: 0.7919333899999401
with GPU: 0.1830111740000575
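One caveat: on a machine without a CUDA-capable GPU (or with recent Numba releases), the `target_backend='cuda'` form may silently compile for the CPU, so much of the speedup above can come from JIT compilation rather than the GPU itself. A quick sanity check is to compare the plain Python loop against NumPy's vectorized update, which does the same work in a single C-level pass. This is a minimal sketch (the function name `inc_loop` and the array size are illustrative, not from the original script):

```python
import numpy as np
from timeit import default_timer as timer

def inc_loop(a):
    # plain Python loop, element by element
    for i in range(a.size):
        a[i] += 1

n = 1_000_000
a_loop = np.ones(n, dtype=np.float64)
a_vec = np.ones(n, dtype=np.float64)

start = timer()
inc_loop(a_loop)
loop_time = timer() - start

start = timer()
a_vec += 1  # vectorized: one C-level pass over the array
vec_time = timer() - start

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s")
```

Both versions leave every element equal to 2.0, which is a handy way to confirm that an optimized function still computes the same result as the baseline.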

These results are from my system, running Python 3.8.16 in an Anaconda environment.

/home/user/anaconda3/envs/zeek/bin/python
3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55)
[GCC 11.3.0]
sys.version_info(major=3, minor=8, micro=16, releaselevel='final', serial=0)
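The interpreter details above can be reproduced with the standard `sys` module, which is useful when checking that the script runs in the intended conda environment:

```python
import sys

# path of the interpreter actually running the script
print(sys.executable)
# full version string, compiler, and build date
print(sys.version)
# structured version tuple
print(sys.version_info)
```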

Hope it will be useful. Cheers! :)
