No:
- CSV isn’t good for anything unless you specify the dialect exactly. CSV is unstandardized, so you can’t parse arbitrary CSV files correctly.
- you don’t have to serialize tables to JSON in the “list of named records” format.
Just use Zarr or similar for array data. A table with more than 200 rows isn’t “human readable” anyway.
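
To make the dialect ambiguity concrete, here’s a minimal sketch using Python’s stdlib csv module (the data is made up): the same bytes parse to completely different rows depending on which dialect the reader assumes, and the wrong guess raises no error.

```python
import csv
import io

# Example data (made up): semicolon-delimited, with a comma inside a quoted field.
data = 'name;value\n"Smith, J.";42\n'

# Reader assuming the default comma dialect: rows are silently mangled.
print(list(csv.reader(io.StringIO(data))))
# -> [['name;value'], ['Smith, J.;42']]

# Reader told the actual dialect: rows come back as intended.
print(list(csv.reader(io.StringIO(data), delimiter=';')))
# -> [['name', 'value'], ['Smith, J.', '42']]
```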

Exactly. I’ve seen so much data silently destroyed deep in some bioinformatics pipeline because of this that I’ve just become an anti-CSV advocate.
Use literally anything else that doesn’t need out-of-band “I’m using this dialect” information that has to match between writer and reader to prevent data loss.
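
For example, a self-describing binary format carries its schema inside the file, so there’s nothing for the reader to guess. A minimal sketch, assuming pandas with a Parquet engine (e.g. pyarrow) installed; the filename is arbitrary:

```python
import pandas as pd

df = pd.DataFrame({"name": ["Smith, J."], "value": [42]})

# Parquet embeds column names and types in the file itself,
# so no out-of-band dialect information is needed to read it back.
df.to_parquet("table.parquet")
back = pd.read_parquet("table.parquet")
assert back.equals(df)  # round-trips with types intact
```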