So I have a project directory "dataplatform" and its contents are as follows:
dataplatform
├── __init__.py
├── commons
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-38.pyc
│   │   ├── get_partitions.cpython-38.pyc
│   │   └── packages.cpython-38.pyc
│   ├── get_partitions.py
│   ├── packages.py
│   ├── pipeline
│   │   ├── __init__.py
│   │   └── pipeline.py
│   └── spark_logging.py
├── pipelines
│   ├── __init__.py
│   └── batch
│       ├── ETL.py
│       ├── ReadMe.rst
│       └── main.py
└── requirement.txt
I have two questions here:
In the pipelines package, I try to import modules from the commons package in the main.py module with from dataplatform.commons import *. However, the IDE (PyCharm) immediately throws an error saying the import is not permitted, as it cannot find the package dataplatform. However, dataplatform here has an __init__.py, and is therefore a package that has commons as a sub-package. What could be going wrong there? When I replace the above import statement with from commons import *, it works just fine.

Now, the project working directory: when I execute the main.py script from the dataplatform directory by passing the complete path of the main.py file to the python3 executable, it refuses to execute, with the same import error being thrown as below:
File "pipelines/batch/main.py", line 2, in <module>
    from dataplatform.commons import *
ModuleNotFoundError: No module named 'dataplatform'
I would like to know what the root directory (working directory) should be when executing the main file on my local machine, so that main.py runs successfully.
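The behaviour can be reproduced without PyCharm. Below is a minimal sketch (it rebuilds a stripped-down copy of the tree in a temporary directory; the module bodies are stand-ins): when you pass the full path of main.py to python3, the interpreter puts main.py's own directory on sys.path, not your working directory, so from dataplatform.commons import * fails no matter where you launch it from. Running it as a module with python3 -m from the directory that contains dataplatform does work, because -m puts the current directory on sys.path.

```python
import os
import subprocess
import sys
import tempfile

# Rebuild a minimal copy of the tree from the question in a temp directory.
root = tempfile.mkdtemp()
for pkg in ("dataplatform", "dataplatform/commons",
            "dataplatform/pipelines", "dataplatform/pipelines/batch"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()
main_py = os.path.join(root, "dataplatform", "pipelines", "batch", "main.py")
with open(main_py, "w") as f:
    f.write("from dataplatform.commons import *\nprint('ok')\n")

# 1) Full path to main.py, even when run from the parent of dataplatform:
#    fails, because sys.path[0] is main.py's directory, not the cwd.
by_path = subprocess.run([sys.executable, main_py], cwd=root,
                         capture_output=True, text=True)
print(by_path.returncode)                       # non-zero
print(by_path.stderr.strip().splitlines()[-1])  # the ModuleNotFoundError line

# 2) Run as a module from the parent of dataplatform: works,
#    because -m puts the current working directory on sys.path.
as_module = subprocess.run(
    [sys.executable, "-m", "dataplatform.pipelines.batch.main"],
    cwd=root, capture_output=True, text=True)
print(as_module.stdout.strip())                 # ok
```

So on a local machine the directory to launch from is the parent of dataplatform, and the invocation should be python3 -m dataplatform.pipelines.batch.main rather than python3 /path/to/main.py; that keeps the dataplatform. prefix on every import without touching sys.path.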
I am keen on keeping the dataplatform package name prefixed to every subpackage name I use in the code, as the environment I am running this on is a Hadoop Sandbox (HDP 3.1), and for reasons unknown to me, prefixing the dataplatform package name is required to load files from HDFS successfully (the code is zipped and stored on HDFS; a call to main.py executes the whole program correctly somehow).
Note: Using sys.path.append is not an option.
Do I understand you correctly that you need from dataplatform.commons import * in main.py for it to work in the Hadoop Sandbox?

You could set up a PyCharm project one level above your dataplatform folder; see my example project structure below. The hidden .idea folder contains the PyCharm project settings. Now you can use an import like from dataplatform.commons import * in main.py, because PyCharm will append the project folder to sys.path.

Alternatively, you can have the PyCharm project directory somewhere else and add the path to the dataplatform folder: File > Settings... > Project: PROJECTNAME > Project Structure..., then on the right side you can add a folder.
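To see concretely why this works, here is a tiny sketch of what the PyCharm setting effectively does: the added folder ends up on sys.path, which is exactly what makes the dataplatform.-prefixed import resolvable. This is illustration only, not a suggestion to call sys.path.insert in the real code (which the question rules out); the package is rebuilt in a temp directory with empty stand-in modules.

```python
import importlib
import os
import sys
import tempfile

# Stand-in for the project folder that PyCharm would register.
root = tempfile.mkdtemp()
for pkg in ("dataplatform", "dataplatform/commons"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

sys.path.insert(0, root)  # what the IDE does behind the scenes
mod = importlib.import_module("dataplatform.commons")
print(mod.__name__)       # dataplatform.commons
```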