Can't read Unicode (Japanese) from a file


Hi, I have a file containing Japanese text, saved as a Unicode file.

I need to read from the file and display its contents on the standard output.

I am using Visual Studio 2008.

#include <iostream>
#include <fstream>
#include <string>
#include <cstdlib>
using namespace std;

int main()
{
      wstring line;
      wifstream myfile("D:\sample.txt"); // file containing Japanese characters, saved as a Unicode file
      //myfile.imbue(locale("Japanese_Japan"));
      if (!myfile)
            cout << "While opening a file an error is encountered" << endl;
      else
            cout << "File is successfully opened" << endl;
      //wcout.imbue(locale("Japanese_Japan"));
      while (myfile.good())
      {
            getline(myfile, line);
            wcout << line << endl;
      }
      myfile.close();
      system("PAUSE");
      return 0;
}

This program generates some random output and I don't see any Japanese text on the screen.

There are 4 best solutions below

Answer (0 votes):

Someone here had the same problem with Russian characters (he's using basic_ifstream<wchar_t>, which should be the same as wifstream according to this page). In the comments of that question they also link to this, which should help you further.

If I understood everything correctly, it seems that wifstream reads the characters correctly, but your program then tries to convert them to whatever locale your program is running in.

Answer (0 votes):

Two errors:

First, "\s" is not a valid escape sequence; the backslash in the path must be escaped:

std::wifstream myfile(L"D:\\sample.txt");

Second, do not mix cout and wcout.

Also check that your file is encoded in UTF-16, little-endian. If it is not, you will have trouble reading it.

Answer (2 votes):

wfstream uses wfilebuf for the actual reading and writing of the data. wfilebuf defaults to using a char buffer internally which means that the text in the file is assumed narrow, and converted to wide before you see it. Since the text was actually wide, you get a mess.

The solution is to replace the wfilebuf buffer with a wide one.

You probably also need to open the file as binary.

const size_t bufsize = 128;
wchar_t buffer[bufsize];
wifstream myfile("D:\\sample.txt", ios::binary);
myfile.rdbuf()->pubsetbuf(buffer, bufsize);

Make sure the buffer outlives the stream object!

See details here: http://msdn.microsoft.com/en-us/library/tzf8k3z8(v=VS.80).aspx

Answer (3 votes):

Oh boy. Welcome to the Fun, Fun world of character encodings.

The first thing you need to know is that your console is not Unicode on Windows. The only way you'll ever see Japanese characters in a console application is if you set your non-Unicode (ANSI) locale to Japanese, which will also make backslashes look like yen symbols and break paths containing European accented characters for programs using the ANSI Windows API (which was supposed to have been deprecated when Windows XP came around, but is still in use to this day...)

So the first thing you'll want to do is build a GUI program instead. But I'll leave that as an exercise for the interested reader.

Second, there are a lot of ways to represent text, and you first need to figure out the encoding in use. Is it UTF-8? UTF-16 (and if so, little- or big-endian)? Shift-JIS? EUC-JP? You can only use a wstream to read the file directly if it is little-endian UTF-16, and even then you need to futz with the stream's internal buffer. Anything other than UTF-16 and you'll get unreadable junk. And all of this is only the case on Windows: other OSes may give wstreams a different representation. It's really best not to use wstreams at all.

So, let's assume it's not UTF-16 (for full generality). In this case you must read it as a char stream, not a wstream. You must then convert this character string into UTF-16 (assuming you're on Windows; other OSes tend to use UTF-8 char*s). On Windows this can be done with MultiByteToWideChar. Make sure you pass in the right code page value; CP_ACP and CP_OEMCP are almost always the wrong answer.

Now, you may be wondering how to determine which code page (that is, which character encoding) is correct. The short answer is: you don't. There is no prima facie way of looking at a text string and saying which encoding it is. Sure, there may be hints: if you see a byte order mark, chances are the file is whatever variant of Unicode produces that mark. But in general, you have to be told by the user, or make a guess and rely on the user to correct you when you're wrong, or select one fixed character set and not attempt to support any others.