Module/Package resolution in Python

So I have a project directory "dataplatform" and its contents are as follows:

    ── dataplatform
        ├── __init__.py
        ├── commons
        │   ├── __init__.py
        │   ├── __pycache__
        │   │   ├── __init__.cpython-38.pyc
        │   │   ├── get_partitions.cpython-38.pyc
        │   │   └── packages.cpython-38.pyc
        │   ├── get_partitions.py
        │   ├── packages.py
        │   ├── pipeline
        │   │   ├── __init__.py
        │   │   └── pipeline.py
        │   └── spark_logging.py
        ├── pipelines
        │   ├── __init__.py
        │   └── batch
        │       ├── ETL.py
        │       ├── ReadMe.rst
        │       └── main.py
        └── requirement.txt

I have two questions here:

  1. In the pipelines package, I try to import modules from the commons package in the main.py module by saying from dataplatform.commons import *. However, the IDE (PyCharm) immediately throws an error saying the import cannot be resolved, as it cannot find the package dataplatform. Yet dataplatform here has an __init__.py, and is therefore a package that has commons as a sub-package. What could be going wrong there? When I replace the above import statement with from commons import *, it works just fine.

  2. Now, about the project working directory: when I execute the main.py script from the dataplatform directory by passing the complete path of the main.py file to the python3 executable, it refuses to run, throwing the same import error as below:

    File "pipelines/batch/main.py", line 2, in from dataplatform.commons import * ModuleNotFoundError: No module named 'dataplatform'

I would like to know what the root directory (working directory) should be, i.e. from where I should execute the main file on my local machine so that main.py runs successfully.
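
To make the setup concrete, this is roughly what I am running (a sketch assuming a bash shell; the absolute path is only a placeholder, not my real one):

    cd /home/me/dataplatform                                 # placeholder path
    python3 /home/me/dataplatform/pipelines/batch/main.py
    # -> ModuleNotFoundError: No module named 'dataplatform'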

I am keen on keeping the dataplatform package name prefixed to every subpackage I import in the code, because the environment on which I run this is a Hadoop Sandbox (HDP 3.1), and for some unknown reason this prefix is required for loading files from HDFS to work (the code is zipped and stored on HDFS; a call to main.py somehow executes the whole program correctly there).

Note: Using sys.path.append is not an option.

1 Answer

phibel answered:

Do I understand you correctly that you need from dataplatform.commons import * in main.py for it to work in the Hadoop Sandbox? You could set up a PyCharm project above your dataplatform folder; see my example project structure below. The hidden .idea folder contains the PyCharm project settings.

├── dataplatform
│   ├── commons
│   │   ├── get_partitions.py
│   │   ├── __init__.py
│   │   ├── packages.py
│   │   ├── pipeline
│   │   │   ├── __init__.py
│   │   │   └── pipeline.py
│   │   └── spark_logging.py
│   ├── __init__.py
│   ├── pipelines
│   │   ├── batch
│   │   │   ├── ETL.py
│   │   │   ├── main.py
│   │   │   └── ReadMe.rst
│   │   └── __init__.py
│   └── requirement.txt
└── .idea
    ├── .gitignore
    ├── inspectionProfiles
    │   └── profiles_settings.xml
    ├── misc.xml
    ├── modules.xml
    ├── stackoverflow.iml
    └── workspace.xml

Now you can use an import like from dataplatform.commons import * in main.py, because PyCharm will append the project folder (the parent of dataplatform) to sys.path.
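
Outside PyCharm, the equivalent is to start Python from that project folder so that dataplatform is resolvable as a top-level package. A minimal sketch, assuming the layout above and a bash shell; the path is a placeholder:

    # run from the folder that contains dataplatform/ (placeholder path)
    cd /path/to/project_root
    # -m runs main.py as part of the package, with the current directory on
    # sys.path, so 'from dataplatform.commons import *' resolves
    python3 -m dataplatform.pipelines.batch.main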

Alternatively, you can have the PyCharm project directory somewhere else and add the path to the folder containing dataplatform under File > Settings... > Project: PROJECTNAME > Project Structure; on the right side you can add that folder as a content root.