TL;DR
Here's an example repository that is set up as described in the first diagram (below): https://github.com/Poddster/package_problems
If you could please make it look like the second diagram in terms of project organisation, such that the following commands still run, then you've answered the question:
$ git clone https://github.com/Poddster/package_problems.git
$ cd package_problems
<do your magic here>
$ nosetests
$ ./my_tool/my_tool.py
$ ./my_tool/t.py
$ ./my_tool/d.py
(or for the above commands, $ cd ./my_tool/ && ./my_tool.py is also acceptable)
Alternatively: Give me a different project structure that allows me to group together related files ('package'), run all of the files individually, import the files into other files in the same package, and import the packages/files into other package's files.
Current situation
I have a bunch of Python files. Most of them are useful when called from the command line, i.e. they all use argparse and if __name__ == "__main__" to do useful things.
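That pattern looks roughly like this (a sketch, not one of the actual files; run, --times, and the output are made up for illustration):

```python
#!/usr/bin/env python
import argparse

def run(times):
    # Placeholder for whatever useful work the real script does.
    return "beep " * times

if __name__ == "__main__":
    # Each script parses its own arguments, so it is useful on its own.
    parser = argparse.ArgumentParser(description="stand-in for a.py")
    parser.add_argument("--times", type=int, default=1)
    args = parser.parse_args()
    print(run(args.times))
```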
Currently I have this directory structure, and everything is working fine:
.
├── config.txt
├── docs/
│   ├── ...
├── my_tool.py
├── a.py
├── b.py
├── c.py
├── d.py
├── e.py
├── README.md
├── tests
│   ├── __init__.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   └── e.py
└── resources
    ├── ...
Some of the scripts import things from other scripts to do their work. But no script is merely a library; they are all invokable, e.g. I could invoke ./my_tool.py, ./a.py, ./b.py, ./c.py etc. and they would do useful things for the user.
"my_tool.py" is the main script that leverages all of the other scripts.
What I want to happen
However, I want to change the way the project is organised. The project itself represents an entire program usable by the user, and will be distributed as such, but I know that parts of it will be useful in different projects later, so I want to encapsulate the current files into a package. In the immediate future I will also add other packages to this same project.
To facilitate this I've decided to re-organise the project to something like the following:
.
├── config.txt
├── docs/
│   ├── ...
├── my_tool
│   ├── __init__.py
│   ├── my_tool.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   ├── e.py
│   └── tests
│       ├── __init__.py
│       ├── a.py
│       ├── b.py
│       ├── c.py
│       ├── d.py
│       └── e.py
├── package2
│   ├── __init__.py
│   ├── my_second_package.py
│   ├── ...
├── README.md
└── resources
    ├── ...
However, I can't figure out a project organisation that satisfies the following criteria:

- All of the scripts are invokable on the command line (either as ./my_tool/a.py or cd my_tool && ./a.py)
- The tests actually run :)
- Files in package2 can do import my_tool
The main problem is with the import statements used by the packages and the tests. Currently, all of the packages, including the tests, simply do import <module> and it's resolved correctly. But when jiggering things around it doesn't work.

Note that supporting py2.7 is a requirement, so all of the files have from __future__ import absolute_import, ... at the top.
What I've tried, and the disastrous results
1

If I move the files around as shown above, but leave all of the import statements as they currently are:

- $ ./my_tool/*.py works and they all run properly
- $ nosetests run from the top directory doesn't work. The tests fail to import the packages' scripts.
- PyCharm highlights import statements in red when editing those files :(
2

If I then change the test scripts to do from my_tool import x:

- $ ./my_tool/*.py still works and they all run properly
- $ nosetests run from the top directory doesn't work. The tests can import the correct scripts, but the imports in the scripts themselves fail when the test scripts import them.
- PyCharm still highlights import statements in red in the main scripts :(
3

If I keep the same structure and change everything to be from my_tool import ..., then:

- $ ./my_tool/*.py results in ImportErrors
- $ nosetests runs everything OK.
- PyCharm doesn't complain about anything
e.g. of the first point:

Traceback (most recent call last):
  File "./my_tool/a.py", line 34, in <module>
    from my_tool import b
ImportError: cannot import name b
4

I also tried from . import x, but that just ends up with ValueError: Attempted relative import in non-package for the direct running of scripts.
Looking at some other SO answers:

I can't just use python -m pkg.tests.core_test as:

a) I don't have main.py. I guess I could have one?
b) I want to be able to run all of the scripts, not just main?
I've tried:

if __name__ == '__main__' and __package__ is None:
    from os import sys, path
    sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))

but it didn't help.
I also tried:
__package__ = "my_tool"
from . import b
But received:
SystemError: Parent module 'loading_tool' not loaded, cannot perform relative import
Adding import my_tool before from . import b just ends up back with ImportError: cannot import name b.
Fix?
What's the correct set of magical incantations and directory layout to make all of this work?
To have the files run from the command line and act as a library, while allowing nosetests to operate in a standard manner, I believe you will have to take a doubled-up approach to the imports.
For example, the Python files will require imports that work both when a file is run directly as a script and when it is imported as part of the package.
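Presumably something along these lines; the temp-dir scaffolding below exists only so the sketch runs on its own, and b stands in for one of the question's real modules:

```python
import os
import sys
import tempfile

# Stand-in for my_tool/b.py, written to a temp dir so the snippet is
# self-contained; in the real project `b` already sits next to this file.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "b.py"), "w") as f:
    f.write("VALUE = 1\n")
sys.path.insert(0, tmp)

# The doubled-up import: bare name first (works when the script is run
# directly from inside my_tool/), package-qualified name as the fallback
# (works when my_tool is imported as a library).
try:
    import b
except ImportError:
    from my_tool import b

print(b.VALUE)
```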
I went through and made a PR off the GitHub repo you linked, with all test cases working:
https://github.com/Poddster/package_problems/pull/1
Edit: I forgot the imports in __init__.py that make the package properly usable from other packages; they've been added now.

Point 1

I believe it's working, so I won't comment on it.
Point 2

I always kept the tests at the same level as my_tool, not below it, but they should work if you add the project root to sys.path at the top of each test file (before importing my_tool or any other .py file in the same directory).
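The top-of-file stanza might look something like this, assuming the test file sits in my_tool/tests/:

```python
import os
import sys

# Walk two levels up from my_tool/tests/<this file> to the project root and
# put it on sys.path, so "import my_tool" and "from my_tool import a" resolve
# no matter which directory nosetests is launched from.
sys.path.insert(0, os.path.abspath(
    os.path.join(os.path.dirname(__file__), "..", "..")))
```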
Point 3

In my_second_package.py, add the project root to sys.path at the top of the file in the same way (before importing my_tool).
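For a file in package2/ that would be one level up; a sketch (the final import is commented out because my_tool only exists in the real project):

```python
import os
import sys

# package2/ and my_tool/ share the same parent directory; putting that parent
# on sys.path makes the my_tool package importable from here.
sys.path.insert(0, os.path.abspath(
    os.path.join(os.path.dirname(__file__), "..")))

# import my_tool  # resolves once the project root is on sys.path
```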
Best regards,
JM
Once you move to your desired configuration, the absolute imports you are using to load the modules that are specific to my_tool no longer work. You need three modifications after you create the my_tool subdirectory and move the files into it:

1. Create my_tool/__init__.py. (You seem to already do this, but I wanted to mention it for completeness.)

2. In the files directly under my_tool: change the import statements to load the modules from the current package. So in my_tool.py, change imports like import a to the relative form from . import a. You need to make a similar change to all your other files. (You mention having tried setting __package__ and then doing a relative import, but setting __package__ is not needed.)

3. In the files located in my_tool/tests: change the import statements that import the code you want to test to relative imports that load from one package up in the hierarchy. So in test_my_tool.py, change imports like import a to from .. import a. Similarly for all the other test files.
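The two relative-import forms can be sanity-checked with a throwaway package; pkg, b, main_mod, and test_b below are stand-ins for the question's my_tool layout, and only the two import lines are the point:

```python
import importlib
import os
import sys
import tempfile
import textwrap

# Build a disposable package mirroring the layout: pkg/ stands in for
# my_tool/, b.py for one of its modules, main_mod.py for my_tool.py, and
# tests/test_b.py for a test file.
root = tempfile.mkdtemp()
tests = os.path.join(root, "pkg", "tests")
os.makedirs(tests)
pkg = os.path.dirname(tests)

open(os.path.join(pkg, "__init__.py"), "w").close()
open(os.path.join(tests, "__init__.py"), "w").close()
with open(os.path.join(pkg, "b.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "main_mod.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from . import b  # was: import b
        def get():
            return b.VALUE
        """))
with open(os.path.join(tests, "test_b.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from .. import b  # was: import b
        def check():
            return b.VALUE == 42
        """))

sys.path.insert(0, root)
main_mod = importlib.import_module("pkg.main_mod")
test_b = importlib.import_module("pkg.tests.test_b")
print(main_mod.get(), test_b.check())
```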
With the modifications above, I can run the modules directly with python -m (e.g. python -m my_tool.my_tool), and I can run the tests with nosetests from the top directory. Note that I can do both with Python 2.7 and Python 3.
Rather than make the various modules under my_tool be directly executable, I suggest using a proper setup.py file to declare entry points and let setup.py create these entry points when the package is installed. Since you intend to distribute this code, you should use a setup.py to formally package it anyway.

Modify the modules that can be invoked from the command line so that, taking my_tool/my_tool.py as an example, the command-line logic that currently runs directly under if __name__ == "__main__": is moved into a main() function that an entry point can call.
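The reshaped module might look something like this; parse_args, the --times option, and the printed message are illustrative stand-ins, not the real tool's interface:

```python
import argparse

def parse_args(argv=None):
    # argv=None falls back to sys.argv[1:], preserving normal CLI behaviour
    # while letting tests pass an explicit argument list.
    parser = argparse.ArgumentParser(description="hypothetical my_tool CLI")
    parser.add_argument("--times", type=int, default=1)
    return parser.parse_args(argv)

def main(argv=None):
    # Logic that previously sat directly under `if __name__ == "__main__":`
    # now lives in main(), so a console_scripts entry point
    # ("my_tool = my_tool.my_tool:main") can invoke it as well.
    args = parse_args(argv)
    print("ran my_tool %d time(s)" % args.times)
    return 0

if __name__ == "__main__":
    main()
```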
Then create a setup.py file that contains the proper entry_points. The entry point should instruct setup.py to create a script named my_tool that invokes the main function in the module my_tool.my_tool. On my system, once the package is installed, there is a script located at /usr/local/bin/my_tool that invokes main in my_tool.my_tool. It produces the same output as running python -m my_tool.my_tool.
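A minimal setup.py along those lines might look like this sketch; the name, version, and find_packages call are assumptions, and the entry_points stanza is the part that matters:

```python
from setuptools import setup, find_packages

setup(
    name="my_tool",
    version="0.1.0",
    packages=find_packages(exclude=["*.tests"]),
    entry_points={
        "console_scripts": [
            # Installs a `my_tool` executable that calls my_tool.my_tool:main
            "my_tool = my_tool.my_tool:main",
        ],
    },
)
```

After installing the package (e.g. pip install .), the my_tool script appears on the PATH and calls main().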