My main problem is described here. Since no one has offered a solution yet, I have decided to look for a workaround. I am looking for a way to limit a Python script's CPU usage (not its priority, but the number of CPU cores it uses) from Python code. I know I can do that with the multiprocessing library (Pool, etc.), but I am not the one running the script with multiprocessing, so I don't know how to do that. I could also do it via the terminal, but this script is being imported by another script, so unfortunately I don't have the luxury of calling it through the terminal.
tl;dr: How do I limit the CPU usage (number of cores) of a Python script that is imported by another script, when I don't even know why it runs in parallel, without running it via the terminal? (Please check the code snippet below, and yes, English is not my native language.)
The code snippet causing the issue:
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA
import numpy as np
X, _ = load_digits(return_X_y=True)
#Copy-paste and increase the size of the dataset to see the behavior at htop.
for _ in range(8):
    X = np.vstack((X, X))
print(X.shape)
transformer = IncrementalPCA(n_components=7, batch_size=200)
# PARTIAL FIT RUNS IN PARALLEL! GOD WHY?
# ---------------------------------------
transformer.partial_fit(X[:100, :])
# ---------------------------------------
X_transformed = transformer.fit_transform(X)
print(X_transformed.shape)
Versions:
- Python 3.6
- joblib 0.13.2
- scikit-learn 0.20.2
- numpy 1.16.2
UPDATE: It doesn't work. Thank you for the clarification, @Darkonaut. The sad thing is, I already knew this wouldn't work and I had already stated it clearly in the question title, but people don't read, I guess.
I guess I was doing it wrong. I've updated the code snippet based on @Ben Chaliah Ayoub's answer. Nothing seems to have changed. I also want to point something out: I am not trying to run this code on multiple cores. The line transformer.partial_fit(X[:100, :]) runs on multiple cores (for some reason), and it doesn't have n_jobs or anything similar. Also, please note that my first example and my original code are not initialized with a pool or anything like it, so I couldn't set the number of cores in the first place (because there was no place to set it). Now there is such a place, but it is still running on multiple cores. Feel free to test it yourself (code below). That's why I am looking for a workaround.
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA
import numpy as np
from multiprocessing import Pool, cpu_count
def run_this():
    X, _ = load_digits(return_X_y=True)
    # Copy-paste and increase the size of the dataset to see the behavior at htop.
    for _ in range(8):
        X = np.vstack((X, X))
    print(X.shape)
    # This is the exact same example taken from scikit-learn's IncrementalPCA documentation.
    transformer = IncrementalPCA(n_components=7, batch_size=200)
    transformer.partial_fit(X[:100, :])
    X_transformed = transformer.fit_transform(X)
    print(X_transformed.shape)

pool = Pool(processes=1)
pool.apply(run_this)
UPDATE: So, I tried to set the BLAS thread counts in my code before importing numpy, but it didn't work (again). Any other suggestions? The latest version of the code can be found below.
Credits: @Amir
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA
import os
os.environ["OMP_NUM_THREADS"] = "1" # export OMP_NUM_THREADS=1
os.environ["OPENBLAS_NUM_THREADS"] = "1" # export OPENBLAS_NUM_THREADS=1
os.environ["MKL_NUM_THREADS"] = "1" # export MKL_NUM_THREADS=1
os.environ["VECLIB_MAXIMUM_THREADS"] = "1" # export VECLIB_MAXIMUM_THREADS=1
os.environ["NUMEXPR_NUM_THREADS"] = "1" # export NUMEXPR_NUM_THREADS=1
import numpy as np
X, _ = load_digits(return_X_y=True)
#Copy-paste and increase the size of the dataset to see the behavior at htop.
for _ in range(8):
    X = np.vstack((X, X))
print(X.shape)
transformer = IncrementalPCA(n_components=7, batch_size=200)
transformer.partial_fit(X[:100, :])
X_transformed = transformer.fit_transform(X)
print(X_transformed.shape)
I solved the problem in the example code given in the original question by setting the BLAS environment variables (from this link). But that is not the answer to my actual question. My first try (the second update) was wrong: I needed to set the number of threads not merely before importing numpy, but before importing the library (IncrementalPCA) that itself imports numpy.
So, what was the problem in the example code? It wasn't an actual problem, but a feature of the BLAS library that numpy uses. Trying to limit it with the multiprocessing library didn't work because, by default, OpenBLAS is set to use all available threads.
Credits: @Amir and @Darkonaut Sources: OpenBLAS 1, OpenBLAS 2, Solution
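For reference, a minimal sketch of that corrected ordering (the same snippet as above, rearranged so the environment variables are set before anything imports numpy, including scikit-learn):

import os

# These must be set before numpy is imported, directly or via scikit-learn.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"

from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA
import numpy as np

X, _ = load_digits(return_X_y=True)
transformer = IncrementalPCA(n_components=7, batch_size=200)
transformer.partial_fit(X[:100, :])   # now runs on a single thread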
But you can explicitly set the correct BLAS environment variable by first checking which BLAS implementation your numpy build uses, like this:
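For example, using numpy's built-in build-configuration report (a minimal sketch; the original check isn't shown here):

import numpy as np

np.__config__.show()   # lists the BLAS/LAPACK libraries this numpy build links against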
This gave results... meaning OpenBLAS is used by my numpy build. And all I needed to write was
os.environ["OPENBLAS_NUM_THREADS"] = "2"
in order to limit the thread usage of the numpy library.

If you want to limit the number of cores used, then you can use multiprocessing.Pool, which produces a pool of worker processes based on the maximum number of cores available on your system and then basically feeds tasks in as the cores become available.
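A minimal sketch of that idea (the square function here is just a placeholder task for illustration, not part of the original answer):

from multiprocessing import Pool

def square(x):
    # Placeholder CPU-bound task used only for illustration.
    return x * x

if __name__ == "__main__":
    # Cap the pool at 2 worker processes, regardless of how many cores the machine has.
    with Pool(processes=2) as pool:
        results = pool.map(square, range(10))
    print(results)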
Also, a good approach is to decrease the impact on the rest of the system: if you want to ensure that the process doesn't take 100% of a core, you can sleep briefly between computations. This will make your OS schedule out your process for 0.01 seconds for each computation and make room for other applications.
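The code that accompanied this suggestion isn't preserved here; a minimal sketch of the idea, with a made-up computation, would be:

from time import sleep

total = 0
for i in range(100000):
    total += i * i   # placeholder for the real per-step computation
    sleep(0.01)      # yield the CPU briefly so other processes get scheduled
print(total)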
You can also run your application with taskset. For example, to limit the number of used CPUs to 4, do:
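The exact command isn't preserved here; a typical invocation (my_script.py is a placeholder name) would be:

taskset --cpu-list 0,1,2,3 python my_script.py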