[Django]-CSV new-line character seen in unquoted field error

186πŸ‘

βœ…

It would be good to see the CSV file itself, but this might work for you. Give it a try and replace:

file_read = csv.reader(self.file)

with:

file_read = csv.reader(self.file, dialect=csv.excel_tab)

Or, open the file in universal-newline mode and pass it to csv.reader, like:

reader = csv.reader(open(self.file, 'rU'), dialect=csv.excel_tab)

Or, use splitlines(), like this:

def read_file(self):
    with open(self.file, 'r') as f:
        data = [row for row in csv.reader(f.read().splitlines())]
    return data
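
On Python 3, where the 'rU' mode is deprecated (and eventually removed), a rough equivalent is to open with newline='' and let the csv module handle the line endings itself. A minimal sketch, assuming self.file is a path as in the open() variant above:

import csv

def read_file(self):
    # newline='' stops Python from translating line endings,
    # so csv.reader can deal with \r / \r\n on its own
    with open(self.file, newline='') as f:
        return [row for row in csv.reader(f, dialect=csv.excel_tab)]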
πŸ‘€alecxe

55πŸ‘

I realize this is an old post, but I ran into the same problem and don’t see the correct answer, so I will give it a try.

Python Error:

_csv.Error: new-line character seen in unquoted field

This is caused by trying to read Macintosh (pre-OS X formatted) CSV files. These are text files that use CR for the end of line. If you are using MS Office, make sure you select either plain CSV format or CSV (MS-DOS). Do not use CSV (Macintosh) as the save-as type.

My preferred EOL version would be LF (Unix/Linux/Apple), but I don’t think MS Office provides the option to save in this format.
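
To check whether a file really is in this CR-only format and normalize it before parsing, here is a quick sketch (my own illustration, not from the original answer; it assumes the file fits in memory and is UTF-8 encoded):

import csv
from io import StringIO

def read_cr_csv(path):
    # read raw bytes so Python does not silently translate line endings
    with open(path, 'rb') as f:
        raw = f.read()
    # a file with CR but no LF is almost certainly an old Mac-format CSV
    if b'\r' in raw and b'\n' not in raw:
        raw = raw.replace(b'\r', b'\n')
    text = raw.decode('utf-8')  # assumption: the file is UTF-8
    return list(csv.reader(StringIO(text)))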

πŸ‘€g.kovatchev

33πŸ‘

For Mac OS X, save your CSV file in β€œWindows Comma Separated (.csv)” format.

19πŸ‘

If this happens to you on a Mac (as it did to me):

  1. Save the file as CSV (MS-DOS Comma-Separated)
  2. Run the following script

    import csv

    with open(csv_filename, 'rU') as csvfile:
        csvreader = csv.reader(csvfile)
        for row in csvreader:
            print(', '.join(row))
    
πŸ‘€Nimo

5πŸ‘

Try running dos2unix on your Windows-imported files first.
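
If dos2unix is not at hand, a rough Python stand-in (my own sketch, not part of the original answer) that rewrites CRLF and bare CR line endings to LF in place:

def normalize_newlines(path):
    # crude dos2unix/mac2unix substitute: force all line endings to LF
    with open(path, 'rb') as f:
        data = f.read()
    with open(path, 'wb') as f:
        f.write(data.replace(b'\r\n', b'\n').replace(b'\r', b'\n'))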

2πŸ‘

This is an error that I faced; I had saved the .csv file on Mac OS X.

While saving, save it as “Windows Comma Separated Values (.csv)”, which resolved the issue.

πŸ‘€Suraj

1πŸ‘

This worked for me on OS X.

import csv

# allow a string variable to be opened as a file
from io import StringIO

# library to map other strange (accented) characters back to plain ASCII
from unidecode import unidecode

# cleanse an input file with Windows formatting into a plain string
with open(filename, 'rb') as fID:
    uncleansedBytes = fID.read()
    # decode the file using the correct encoding scheme
    # (probably this old Windows one)
    uncleansedText = uncleansedBytes.decode('Windows-1252')

    # replace carriage returns with new lines
    cleansedText = uncleansedText.replace('\r', '\n')

    # transliterate any remaining non-ASCII characters to ASCII
    asciiText = unidecode(cleansedText)

# read each line of the csv file and store it as an array of dicts,
# using the first line as the field names for each dict
reader = csv.DictReader(StringIO(asciiText))
for line_entry in reader:
    pass  # do something with your read data
πŸ‘€Resonance

1πŸ‘

I know this has been answered for quite some time, but it did not solve my problem. I am using DictReader and StringIO for my CSV reading due to some other complications. I was able to solve the problem more simply by replacing the line endings explicitly:

import urllib.request

with urllib.request.urlopen(q) as response:
    raw_data = response.read()
    encoding = response.info().get_content_charset('utf8')
    data = raw_data.decode(encoding)
    if '\r\n' not in data:
        # probably a Windows/old-Mac line-ending thing... try to update it
        data = data.replace('\r', '\r\n')

Might not be reasonable for enormous CSV files, but worked well for my use case.
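
For completeness, the repaired string can then be fed to DictReader the way the answer describes; a small sketch (the names beyond data are my own):

import csv
from io import StringIO

# the first row of the repaired text supplies the field names for each dict
reader = csv.DictReader(StringIO(data))
rows = list(reader)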

πŸ‘€Dougyfresh

0πŸ‘

Alternative and fast solution: I faced the same error. I reopened the “weird” CSV file in Gnumeric on my Lubuntu machine and exported the file as a CSV file. This corrected the issue.

πŸ‘€p699
