I am currently working on a combined request against the Overpass API (the backend of overpass-turbo): the aim is to run a request like the following;

[out:csv(::id,::type,"name","addr:postcode","addr:city","addr:street","addr:housenumber","website","contact:email")][timeout:600];
area["ISO3166-1"="NL"]->.a;
( node(area.a)[amenity=childcare];
  way(area.a)[amenity=childcare];
  rel(area.a)[amenity=childcare];);
out;

The query uses the ISO3166-1 key (see https://de.wikipedia.org/wiki/ISO-3166-1-Kodierliste). I want to run it, e.g. in Python, with the various country codes for

Netherlands, Germany, Austria, Switzerland, France, 

and so forth. How can I encode this in Python so that everything runs in one go and all results end up in a single DataFrame of comma-separated values?
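As an aside: Overpass QL can also match several ISO3166-1 codes in one literal request via a regex on the area tag, though you then lose the easy per-country grouping that a loop provides. A sketch of how such a query string could be built in Python (the regex area filter `~` is standard Overpass QL; whether a single merged query suits your workflow is a judgment call):

```python
# Build one Overpass query covering several countries via a regex area filter.
country_codes = ["NL", "DE", "AT", "CH", "FR"]
regex = "^(" + "|".join(country_codes) + ")$"  # -> ^(NL|DE|AT|CH|FR)$

query = f'''
[out:csv(::id,::type,"name","addr:postcode","addr:city","contact:email")][timeout:600];
area["ISO3166-1"~"{regex}"]->.a;
( node(area.a)[amenity=childcare];
  way(area.a)[amenity=childcare];
  rel(area.a)[amenity=childcare];);
out;
'''
```

The downside is that the result rows no longer tell you which country they came from, which is why the per-country loop below is often the more practical approach.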

Well, I think that to combine multiple requests with different ISO3166-1 country codes and run them in one go in Python, we need a loop that iterates over the country codes mentioned above, modifies the request accordingly, and then merges the results into one single DataFrame. Using the requests library for the HTTP calls and pandas to handle the data would be appropriate to get this done:

import requests
import pandas as pd
from io import StringIO

# List of ISO3166-1 country codes
country_codes = ["NL", "DE", "AT", "CH", "FR"]  # Add more country codes as needed

# Base request template
base_request = """
[out:csv(::id,::type,"name","addr:postcode","addr:city","addr:street","addr:housenumber","website","contact:email")][timeout:600];
area["ISO3166-1"="{}"]->.a;
( node(area.a)[amenity=childcare];
  way(area.a)[amenity=childcare];
  rel(area.a)[amenity=childcare];);
out;
"""

# List to store individual DataFrames
dfs = []

# Loop through each country code
for code in country_codes:
    # Construct the request for the current country
    request = base_request.format(code)
    
    # Send the request to the Overpass API
    response = requests.post("https://overpass-api.de/api/interpreter", data=request)
    
    # Check if the request was successful
    if response.status_code == 200:
        # Parse the response as CSV and convert it to DataFrame
        try:
            df = pd.read_csv(StringIO(response.text), error_bad_lines=False)
        except pd.errors.ParserError as e:
            print(f"Error parsing CSV data for {code}: {e}")
            continue
        
        # Add country code as a new column
        df['country_code'] = code
        
        # Append the DataFrame to the list
        dfs.append(df)
    else:
        print(f"Error retrieving data for {code}")

# Merge all DataFrames into a single DataFrame
result_df = pd.concat(dfs, ignore_index=True)

# Save the DataFrame to a CSV file or perform further processing
result_df.to_csv("merged_childcare_data.csv", index=False)

I run this on Google Colab. What I wanted this code to achieve:

a. country_codes contains the ISO3166-1 country codes for the countries we want to query.
b. base_request is the base template of our Overpass API request, with a placeholder {} for the corresponding country code.

Looping: the loop iterates over each country code, fills the base request with the current country code, sends the request, parses the response into a DataFrame, and appends it to the dfs list.

Finally, all DataFrames in dfs are concatenated into a single DataFrame result_df, which we can then save to a CSV file or process further as needed.

But at the moment I run into some errors on Google Colab. See here:

<ipython-input-3-67ee61d1e734>:33: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version. Use on_bad_lines in the future.


  df = pd.read_csv(StringIO(response.text), error_bad_lines=False)
Skipping line 337: expected 1 fields, saw 2
Skipping line 827: expected 1 fields, saw 2

(the same FutureWarning appears once per country, each time followed by many more messages of the form "Skipping line N: expected 1 fields, saw 2" — occasionally "saw 3", "saw 4", or "saw 6")
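The "expected 1 fields, saw 2" messages point at the actual problem: Overpass's out:csv output is tab-separated by default, while pd.read_csv defaults to commas, so every row collapses into a single field unless it happens to contain a comma. A minimal reproduction with a hypothetical two-line sample:

```python
import pandas as pd
from io import StringIO

# Hypothetical sample in Overpass's default tab-separated CSV format
sample = "@id\t@type\tname\n123\tnode\tKita Sonnenschein\n"

df_comma = pd.read_csv(StringIO(sample))          # default sep=',' -> one wide column
df_tab = pd.read_csv(StringIO(sample), sep="\t")  # correct: three columns
```

With the default separator the whole header becomes one column name; passing sep='\t' parses the three fields cleanly.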

Hi there dear ouroboros1,

thanks to your help I was encouraged to go on:

import requests
import pandas as pd
from io import StringIO

# List of ISO3166-1 country codes
country_codes = ["NL", "DE", "AT", "CH", "FR"]  # Add more country codes as needed

# Base request template
base_request = """
[out:csv(::id,::type,"name","addr:postcode","addr:city","addr:street","addr:housenumber","website","contact:email")][timeout:600];
area["ISO3166-1"="{}"]->.a;
( node(area.a)[amenity=childcare];
  way(area.a)[amenity=childcare];
  rel(area.a)[amenity=childcare];);
out;
"""

# List to store individual DataFrames
dfs = []

# Loop through each country code
for code in country_codes:
    # Construct the request for the current country
    request = base_request.format(code)
    
    # Send the request to the Overpass API
    response = requests.post("https://overpass-api.de/api/interpreter", data=request)
    
    # Check if the request was successful
    if response.status_code == 200:
        # Parse the response as a DataFrame; Overpass out:csv output is tab-separated by default
        try:
            df = pd.read_csv(StringIO(response.text), sep='\t')
        except pd.errors.ParserError as e:
            print(f"Error parsing CSV data for {code}: {e}")
            continue
        
        # Add country code as a new column
        df['country_code'] = code
        
        # Append the DataFrame to the list
        dfs.append(df)
    else:
        print(f"Error retrieving data for {code}")

# Merge all DataFrames into a single DataFrame
result_df = pd.concat(dfs, ignore_index=True)

# Save the DataFrame to a CSV file or perform further processing
result_df.to_csv("merged_childcare_data.csv", index=False)

That solved the issues: I got back 570 KB of data.

Thanks a lot!
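One further refinement worth considering (my own assumption, not part of the code above): the public Overpass instance rate-limits clients, so when looping over many countries it can help to back off and retry on HTTP 429 (too many requests) or 504 (gateway timeout). A hedged sketch; fetch_country_csv is a hypothetical helper name:

```python
import time
from typing import Optional

import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def fetch_country_csv(query: str, pause: float = 5.0, retries: int = 3) -> Optional[str]:
    """POST an Overpass query; back off and retry on 429/504 responses."""
    for attempt in range(retries):
        response = requests.post(OVERPASS_URL, data=query)
        if response.status_code == 200:
            return response.text
        if response.status_code in (429, 504):
            time.sleep(pause * (attempt + 1))  # simple linear back-off
            continue
        break  # any other status: give up immediately
    return None
```

In the loop above you would then call fetch_country_csv(request) per country and skip codes that return None, instead of posting directly.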