How can two PsychoPy .psydat files be combined into a .csv I can import into R?

While running an experiment I made an error. When PsychoPy has the correct participant codes, the two conditions are combined into one .csv per session; if not, two separate .psydat files are created. I know that .psydat is a format which contains more information than the .csv format I need, but I'm not sure how to access that information so it can be added to my dataframe and analysed in R. For example, I need to combine the data in MA22BI1.psydat and MAMA22BI1.psydat into MA22BI1.csv. I couldn't find anything remotely related in the PsychoPy forum. Thanks for any ideas!

You can regenerate a .csv file from a .psydat file using the code supplied here: psychopy.org/general/dataOutputs.html.
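For reference, the approach on that page boils down to something like the sketch below, assuming the .psydat files were saved by a TrialHandler/ExperimentHandler (the .psydat file names are from the question; the combined file name is my choice):

```python
from psychopy.tools.filetools import fromFile
import pandas as pd

# Regenerate a .csv from each .psydat file.
for name in ['MA22BI1', 'MAMA22BI1']:
    handler = fromFile(name + '.psydat')             # TrialHandler/ExperimentHandler
    handler.saveAsWideText(name + '.csv', delim=',')

# Stack the two sessions into one file for R (assumes matching columns).
combined = pd.concat([pd.read_csv('MA22BI1.csv'), pd.read_csv('MAMA22BI1.csv')])
combined.to_csv('MA22BI1_combined.csv', index=False)
```

The combined .csv can then be read in R with read.csv() as usual.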

Related

Kettle - Two csv inputs into PostgreSQL output

I have a class project using Pentaho. I need to create a dashboard using two different inputs feeding a PostgreSQL output. My problem is that, using Kettle, I have to match two different .csv files that go into Postgres. One of the CSVs is about crimes, the other is about weather. I manually added two columns to the weather one, so they have two matching columns: 'Month' and 'Year'.
My question is: how can I use these matching columns (and does doing that make sense) so I can later create the dashboard and run queries like 'What crimes were committed when it was raining?'
Sorry if I'm not very precise; I'm a bit lost using Pentaho. If anyone could give me some help I would be thankful.
If your intent is to join two CSV files, please check the Join step.
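If it helps to see the operation outside Kettle, what the Join step does here is an inner join on the shared Month/Year columns. A rough Python sketch (file names are placeholders, and 'Precipitation' is an assumed column in the weather file):

```python
import pandas as pd

# Placeholder file names; both files share the 'Month' and 'Year' columns.
crimes = pd.read_csv('crimes.csv')
weather = pd.read_csv('weather.csv')

# Inner join on the matching columns: each crime row picks up the weather
# recorded for that month and year.
joined = crimes.merge(weather, on=['Month', 'Year'], how='inner')
joined.to_csv('crimes_weather.csv', index=False)

# Example query: crimes committed when it was raining.
raining = joined[joined['Precipitation'] > 0]
print(len(raining), 'crimes in rainy month-years')
```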

Import a CSV file into Access using VBA

I need to use VBA to import a large CSV file into an Access table. The delimiter is "" (two double quotes), except that for some reason the first value is followed by " (only one quote) instead of two like every other value. The first row contains the column headers and is delimited the same way. At the bottom I have attached an example.
The CSV files are generated automatically by an accounting system daily, so I cannot change the format. They are also quite large (150,000+ lines, many columns). I'm fairly new to VBA, so as much detail as possible would be much appreciated.
Thanks in advance!
Example of format
That doesn't sound like a CSV file. Can you open it in Excel, convert it to a true CSV, and then import that into Access? You will find many VBA-driven import options at the URL below.
http://www.accessmvp.com/KDSnell/EXCEL_Import.htm
Also, take a look at these URLs.
http://www.erlandsendata.no/english/index.php?d=envbadacimportado
http://www.erlandsendata.no/english/index.php?d=envbadacimportdao
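If converting by hand in Excel isn't practical (the files arrive daily at 150,000+ lines), another route is to normalize the delimiters with a small script before the Access import. A rough sketch, assuming each line looks like value1"value2""value3""... as described in the question (file names are placeholders):

```python
import csv

# Rewrite the oddly quote-delimited export as a true comma-separated CSV.
with open('export.txt', encoding='utf-8') as src, \
     open('export_clean.csv', 'w', newline='', encoding='utf-8') as dst:
    writer = csv.writer(dst)
    for line in src:
        line = line.rstrip('\n')
        # Normalize the first single-quote separator to a doubled one,
        # then split every field on the doubled-quote delimiter.
        first = line.find('"')
        if first != -1 and line[first:first + 2] != '""':
            line = line[:first] + '"' + line[first:]
        writer.writerow(line.split('""'))
```

The cleaned file can then be imported with any of the standard VBA methods at the links above.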

Import Unformatted txt file into SQL

I am having an issue importing data into SQL from a text file. Not because I don't know how...but because the formatting is pretty much terrible for this purpose. Below is an altered sample of the types of text files I need to work with:
1 VA - P
2 VB to 1X P
3 VC to 1Y P
4 N - P
5 G to 1G,Frame P
6 Fout to 1G,Frame P
7 Open Breaker P
8 1B to 1X P
9 1C to 1Y P
Test Status: Pass
Hi-Pot # 1500V: Pass
Customer Order:904177-F
Number: G4901626-200
Serial Number: J245F6-2D03856
Catalog #: CBDC37-X5LE30-H40-L630C-4GJ-G31
Operator: TGY
Date: Aug 01, 2013
Start Time: 04:09:26
Finish Time: 04:09:33
The first 9 lines are all specific test results (tab separated), with header information below. My issue is that I need to figure out:
How can I take the data above and turn it into something broken down into a standard column format to import into SQL?
How can I then automate this such that I can loop through an entire folder structure?
(What you see above is one of hundreds of files divided into several sub-directories.)
Also note that the number of test lines above the header information varies from file to file. The header information remains in much the same format, though. This is all legacy data that cannot be regenerated, but it needs to be imported into our SQL databases.
I am thinking of using an SSIS project with a custom script to import the data: splitting the top section from the bottom by looking for the first empty row, then pivoting the header data into column format, then merging and moving on. But I don't write much VB and I'm not sure how to approach that.
I am working in a SQL Server 2008R2 environment with access to BIDS.
Thoughts?
I would start by importing the data as all character into a table with a single field, one record per line. Then, from that table, you can parse each record into the fields and field types appropriate for each line. Hopefully there is a way to figure out what kind of data each line is: whether each file is consistent in order, or whether the header record indicates information about subsequent lines. From that, the data can be moved (parsing may take more than one pass) to a final table with the data stored in a format that is usable for whatever you need.
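As a concrete illustration of that parsing pass, here's a rough Python sketch (the file name is a placeholder, and the classification rules are inferred from the sample above): numbered, tab-separated lines are test results, and "Name: value" lines are header fields.

```python
import re

def parse_file(lines):
    """Split a test report into (test_rows, header_fields)."""
    tests, header = [], {}
    for line in lines:
        line = line.rstrip('\n')
        if not line.strip():
            continue
        if re.match(r'^\d+\t', line):      # numbered, tab-separated test result
            tests.append(line.split('\t'))
        elif ':' in line:                   # 'Name: value' header line
            key, _, value = line.partition(':')
            header[key.strip()] = value.strip()
    return tests, header

with open('report.txt') as f:               # placeholder file name
    tests, header = parse_file(f)
# Each test row can now be inserted with the header fields repeated per row.
```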
I would first concentrate on getting the data into the database in the least complicated (and least error prone) way possible. Create a table with three columns: filename, line_number and line_data. Plop all of your files into that table and then you can start to think about how to interpret the data. I would probably be looking to use PIVOT, but if different files can have different numbers of fields it may introduce complications.
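That staging load also takes care of looping over the whole folder structure. A minimal sketch using Python and pyodbc (server, database, table name, and path are all placeholders):

```python
import os
import pyodbc

conn = pyodbc.connect('DRIVER={SQL Server};SERVER=myserver;'
                      'DATABASE=mydb;Trusted_Connection=yes')   # placeholder DSN
cur = conn.cursor()

# Walk every sub-directory and plop each line into the staging table.
for root, _, files in os.walk(r'C:\legacy_reports'):            # placeholder path
    for name in files:
        path = os.path.join(root, name)
        with open(path) as f:
            for line_number, line_data in enumerate(f, start=1):
                cur.execute(
                    'INSERT INTO staging (filename, line_number, line_data) '
                    'VALUES (?, ?, ?)',
                    path, line_number, line_data.rstrip('\n'))
conn.commit()
```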
I would use a different approach and use an SSDT/SSIS package to import the data.
Add a script component to read in the text file and convert it to XML. That's not hard; there are many examples on the web, and a rough sketch follows below. In your script, store the XML you build in a variable.
Add a data flow.
Add an XML Source. In the XML Source you can select the XML variable you created and process each group of data present in your file. Here is some information on using the XML Source.
Add a destination task to import it into a destination of your choice.
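An SSIS script component would be written in C# or VB, but the shape of the text-to-XML conversion is the same in any language. A rough Python sketch, with made-up element names and a placeholder file name:

```python
import re
import xml.etree.ElementTree as ET

root = ET.Element('Report')                 # made-up element names throughout
tests = ET.SubElement(root, 'Tests')
header = ET.SubElement(root, 'Header')

with open('report.txt') as f:               # placeholder file name
    for line in f:
        line = line.strip()
        if re.match(r'^\d+\t', line):
            ET.SubElement(tests, 'Test').text = line
        elif ':' in line:
            key, _, value = line.partition(':')
            field = ET.SubElement(header, 'Field', name=key.strip())
            field.text = value.strip()

# This string is what you would store in the SSIS variable.
xml_string = ET.tostring(root, encoding='unicode')
```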
This solution assumes your input lines are terminated {CR}{LF}, the normal Windows way.
Tell MSSQL's Import/Export Wizard to import a Flat File; the Format is "Delimited"; the "Text Qualifier" is the {CR}; the "Header Row Delimiter" is the {LF}; and the OutputColumnWidth (in "Advanced") is a bit more than the longest possible line length.
It's simple and it works.
I just used this to import 23 million lines of mixed up data, and it took less than ten minutes. Now to edit it...

SQL Server 2008 - TSQL Read CSV file

I am working on a project that basically entails importing a CSV file into a SQL Server 2008 R2 database. The CSV file is generated from an Excel file that is populated by a "manager" with PR hours for his employees. It also includes some additional information, such as which job and phase the employees were working on, and the number of hours for a piece of equipment (if used).
Once you generate a CSV file from that, it's not exactly the usual straightforward "column"-based CSV file. It's more like a "row"-based CSV file, with each row being kind of unique. Due to this caveat, I cannot do a straight dump (using BULK INSERT or OPENROWSET) into SQL, which would essentially create a (temp) table with the appropriate columns filled.
I am looking to use the fields within the CSV file based on the "location" of each field in the row.
So basically the positions of the data will remain the same, since every CSV is based on a TEMPLATE file; all I have to do is navigate through the CSV file using SQL code to find the right field based on its position in the row. I hope that gives you a better understanding of what I am trying to achieve here. Sorry for the long wall of text.
I researched a bit and here's what I have come up with so far:
Reads CSV files into a temp table through a custom SQL function (Reading lines from a file)
https://www.simple-talk.com/sql/t-sql-programming/reading-and-writing-files-in-sql-server-using-t-sql/
This one is interesting. Dumps the whole file as a BLOB and then you can sift through the data.
http://www.mssqltips.com/sqlservertip/1643/using-openrowset-to-read-large-files-into-sql-server/
Finally, this one essentially splits out the rows and creates separate records per row. Interesting..
http://ask.sqlservercentral.com/questions/17408/how-to-read-a-text-file.html
If anyone has any suggestions or steps that I could follow to get through this, I would greatly appreciate it.
To the Mods: If I have posted something (especially the links) that shouldn't be here, please feel free to remove it. I apologize if I did.
Thanks much.. Hope to hear some positive responses! :)
Warm Regards,
Pranav
If the file is not too large, another option is to post-process the file in Excel using a VBA macro. Of course, you'd need to come up to speed using the Excel object model and VBA, but the recording function makes it fairly simple. One advantage of the VBA approach is that it seems you really do want to do row by row processing, and VBA is better for that, whereas SQL is better for set-based operations.
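Whichever tool does the row-by-row pass, the heart of it is positional field extraction. A sketch in Python, with invented row tags, positions, and file name, since the actual template isn't shown in the question:

```python
import csv

records = []
with open('pr_hours.csv', newline='') as f:     # placeholder file name
    for row in csv.reader(f):
        if not row:
            continue
        # Invented positions for illustration: the real TEMPLATE file
        # would dictate which index holds which field.
        if row[0] == 'EMPLOYEE':
            records.append({'name': row[1], 'job': row[2],
                            'phase': row[3], 'hours': float(row[4])})
        elif row[0] == 'EQUIPMENT':
            records.append({'equipment': row[1], 'hours': float(row[2])})
# 'records' can then be bulk-inserted into the real table.
```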

Help Importing CSV file with Variable Columns per Row into SQL Table using Import tool or SSIS

I am stuck with a CSV file with over 100,000 rows that contains product images from a provider. Here are the details of the issue, I would really appreciate some tips to help resolve this. Thanks.
The file has one row per product and the following four columns:
ID,URL,HEIGHT,WIDTH
example: 1,http://i.img.com,100,200
The problem starts when a product has multiple images.
Instead of having one row per image, the file has more columns in the same row.
example:
1,http://i.img.com,100,200,//i.img.com,20,100,//i.img.com,30,50
Note that only the first image has "http://"; the remaining images start with "//".
There is no telling how many images there are per product, hence no way to know the total number of columns per row or the maximum number of columns.
How can I import this using SSIS or the SQL import wizard?
I also need to do this at regular intervals.
Thank you for your help.
I don't think that you can use any standard SSIS task or wizard to do this. You're going to have to write some custom code which parses each line. You can do this in SSIS using VB code or you can import the file into a staging table that's just a single column to hold each row and do the parsing in SQL. SSIS will probably be faster for this kind of operation.
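The parsing itself is straightforward because the extra columns always arrive in (URL, height, width) triples. A rough Python sketch of the normalization, with placeholder file names, assuming a header row and complete triples as in the examples above:

```python
import csv

# Normalize variable-width rows into one (ID, URL, HEIGHT, WIDTH) row per image.
with open('images.csv', newline='') as src, \
     open('images_normalized.csv', 'w', newline='') as dst:
    writer = csv.writer(dst)
    writer.writerow(['ID', 'URL', 'HEIGHT', 'WIDTH'])
    reader = csv.reader(src)
    next(reader)                              # skip the header row
    for row in reader:
        product_id = row[0]
        # Remaining fields come in (url, height, width) triples.
        for i in range(1, len(row), 3):
            url, height, width = row[i:i + 3]
            if url.startswith('//'):          # only the first image has http:
                url = 'http:' + url
            writer.writerow([product_id, url, height, width])
```

The normalized file then imports cleanly with the wizard or a plain BULK INSERT.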
Another possibility is to preprocess the file using regex or a search-and-replace command. Try to get double-quotes around the image list, and then you should be able to import the whole file fine, with the quoted part going into a single column. Catching the start of the string should be easy enough, given the "http://" you can search for. Determining where the closing quote goes might be more of a problem.
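As a concrete example of that preprocessing, a single regex substitution gets you most of the way if you treat the end of the line as the closing quote (an assumption: the image list is taken to run to the end of each row; file names are placeholders):

```python
import re

# Wrap everything after the ID in double-quotes so the whole image list
# lands in a single column on import. Header lines don't match and pass through.
with open('images.csv') as src, open('images_quoted.csv', 'w') as dst:
    for line in src:
        dst.write(re.sub(r'^(\d+),(.*)$', r'\1,"\2"', line.rstrip('\n')) + '\n')
```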
A third potential solution would be to get the source to fix the data. Even if you can't get the images in separate rows (or another file with separate rows, which would be ideal), maybe you can get the double-quotes added from the source as part of the export. This would likely be less error-prone than using the search-and-replace method.
Good luck!