I've seen some reports that PyCharm is slow, but I'm having an issue that seems too slow even compared to normal operation.
I have a big set of data in a pandas dataframe (read from a 440 MB CSV file).
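For reference, the loading step is essentially this (the file name here is just illustrative):

    import pandas as pd

    # ~440 MB on disk; becomes a large in-memory dataframe
    my_data = pd.read_csv("big_file.csv")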
When I'm using the IPython console inside PyCharm, every time I try to handle that data, say by typing my_data. (with the trailing dot), it just hangs there for about 30 seconds.
I don't really understand what is going on, but it seems PyCharm is going through all the data to find smart autocompletion suggestions (which is a really dumb thing to do on data this size).
Any way to deactivate this behavior?
I have been facing the same issue for a long time as well: PyCharm debugging is extremely slow when using large pandas dataframes. If I want to view the contents of a dataframe in the Watches panel, it often gives me a timeout after waiting for minutes, so I basically stopped using the debugger when working with dataframes.
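As a rough illustration, something on this scale is enough to reproduce the problem for me (the exact shape is made up):

    import numpy as np
    import pandas as pd

    # a few million rows of floats; set a breakpoint after this line
    # and try adding "df" to the Watches panel
    df = pd.DataFrame(np.random.randn(5_000_000, 10))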
What I just found, however, is that under
File -> Settings -> Build, Execution, Deployment -> Python Debugger
you have to switch on the "Gevent Compatible" flag.
I have also switched on all the other flags in this window (Collect run-time types information, Attach to subprocess, and PyQt compatible), but the Gevent Compatible flag is the one that really does the job for me: it now gives me the dataframe contents in the debugger watches instantaneously.
I am using PyCharm version 2016.2.3.
If you don't mind disabling the autocompletion completely, I think this should work:
Go to
File > Settings (Ctrl+Alt+S) > IDE Settings > Editor > Code Completion
And turn off
Insert selected variant by typing dot, space, etc.
Depending on how much you handle the data, you could be hitting memory limits. IPython remembers each In []: and Out []: of the session.
In is a list that is appended to for each thing you input. Out is another cache that every displayed result is added to. So if you are working with a very large array, copies of it can pile up in Out and you end up with several copies of the array in memory.
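As a sketch of what that looks like in practice (the array size is just an example):

    In [1]: import numpy as np

    In [2]: big = np.zeros((10_000, 10_000))   # ~800 MB of float64

    In [3]: big                                # displaying it also stores a reference in Out[3]

    In [4]: del big                            # does NOT free the memory; Out[3] still holds the array

    In [5]: %reset -f out                      # flush the output cache to actually release it

There is also %xdel big, which deletes a variable and tries to clear IPython's internal references to it as well.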