TypeError: string indices must be integers, not 'str'. how to fix this in this particular case?


I am trying to pull data for some stocks. I typed in everything the course instructor told me to, but it's not working; it seems like a problem with my software. I asked ChatGPT about it, and it says to update pandas-datareader.

this is what I tried

import pandas as pd
import pandas_datareader.data as pdr

stock = 'SPY'
source = 'yahoo'
startdate = '2022-01-01'
enddate = '2022-01-31'

stocks_df = pdr.DataReader(stock, source, startdate, enddate)
print(stocks_df)

and this is what I got

TypeError                                 Traceback (most recent call last)
Cell In[23], line 8
      5 startdate='2022-01-01'
      6 enddate='2022-01-31'
----> 8 stocks_df = pdr.DataReader(stock,source,startdate,enddate)
     10 print(stocks_df)

File ~/anaconda3/lib/python3.11/site-packages/pandas/util/_decorators.py:210, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
    208         raise TypeError(msg)
    209     kwargs[new_arg_name] = new_arg_value
--> 210 return func(*args, **kwargs)

File ~/anaconda3/lib/python3.11/site-packages/pandas_datareader/data.py:379, in DataReader(name, data_source, start, end, retry_count, pause, session, api_key)
    367     raise NotImplementedError(msg)
    369 if data_source == "yahoo":
    370     return YahooDailyReader(
    371         symbols=name,
    372         start=start,
    373         end=end,
    374         adjust_price=False,
    375         chunksize=25,
    376         retry_count=retry_count,
    377         pause=pause,
    378         session=session,
--> 379     ).read()
    381 elif data_source == "iex":
    382     return IEXDailyReader(
    383         symbols=name,
    384         start=start,
   (...)
    390         session=session,
    391     ).read()

File ~/anaconda3/lib/python3.11/site-packages/pandas_datareader/base.py:253, in _DailyBaseReader.read(self)
    251 # If a single symbol, (e.g., 'GOOG')
    252 if isinstance(self.symbols, (string_types, int)):
--> 253     df = self._read_one_data(self.url, params=self._get_params(self.symbols))
    254 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
    255 elif isinstance(self.symbols, DataFrame):

File ~/anaconda3/lib/python3.11/site-packages/pandas_datareader/yahoo/daily.py:153, in YahooDailyReader._read_one_data(self, url, params)
    151 try:
    152     j = json.loads(re.search(ptrn, resp.text, re.DOTALL).group(1))
--> 153     data = j["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]
    154 except KeyError:
    155     msg = "No data fetched for symbol {} using {}"

TypeError: string indices must be integers, not 'str'
1 Answer

If you're running into issues with pandas_datareader when fetching stock data, try web scraping with BeautifulSoup instead, then store the scraped data in a CSV file. Make sure to create a folder called "Data" containing a file called "Stocks.csv". After reading the detailed instructions below, look at the sample code provided at the end of this answer.

  1. Web Scraping: First, identify a website that displays the stock information you need; I recommend Yahoo Finance because it is easy to scrape. Use Python's urllib.request module (from urllib.request import Request, urlopen) to fetch the page content. Then use BeautifulSoup (from bs4 import BeautifulSoup) to parse the HTML and extract the required data, such as stock prices and dates. BeautifulSoup lets you navigate and search the structure of the page, making it easier to locate the data you need.

  2. Storing Data in a CSV File: Once you have scraped the data, organize it in Python using lists or dictionaries. Then use Python's csv module to write it to a CSV file: create a csv.writer object and call writerow() or writerows(). Again, make sure the "Data" folder and the "Stocks.csv" file exist.

  3. Code Structure: Your script will typically start by importing the necessary libraries (urllib.request, BeautifulSoup, and csv). It will then define functions for fetching and parsing the webpage, followed by code that extracts the needed data and, finally, writes that data to the CSV file.
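The steps above can be sketched roughly as follows. To keep the sketch runnable offline, the live urllib.request fetch is replaced by a small static HTML snippet, and the CSV is written to an in-memory buffer rather than "Data/Stocks.csv"; the table markup and class names ("symbol", "price") are made up for illustration and are not Yahoo Finance's real page structure.

```python
import csv
import io
from bs4 import BeautifulSoup

# Static HTML standing in for a page fetched with Request/urlopen.
# In a real script you would fetch resp = urlopen(Request(url)) and
# pass resp.read() to BeautifulSoup instead.
html = """
<table>
  <tr><td class="symbol">SPY</td><td class="price">475.31</td></tr>
  <tr><td class="symbol">AAPL</td><td class="price">193.58</td></tr>
</table>
"""

# Step 1: parse the HTML and extract symbol/price pairs.
soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.find_all("tr"):
    symbol = tr.find("td", class_="symbol").get_text(strip=True)
    price = tr.find("td", class_="price").get_text(strip=True)
    rows.append([symbol, price])

# Step 2: write the rows out as CSV. In a real script, replace the
# StringIO buffer with open("Data/Stocks.csv", "w", newline="").
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Symbol", "Price"])
writer.writerows(rows)
print(buffer.getvalue())
```

For a live page you would swap the static snippet for the fetched response body and adjust the find()/find_all() selectors to match the site's actual markup.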

Here is sample code showing how to do it; it scrapes Trending Tickers from Yahoo Finance. Click the link below to view the image:

Sample Code