I have a .sql file that I want to convert to NoSQL, as I have a coursework on MongoDB.
What application can I use or how can I do it?
In a quick Google search, I found a website that converts CREATE and INSERT INTO statements to a JSON or JavaScript format. However, if you want to create a different database structure (which I would probably recommend), you might want to write a Python script that produces a JSON file you can import into MongoDB. I guess it all depends on what you want to create; a rough sketch of such a script is below.
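A minimal sketch of that idea, assuming the dump contains simple single-row INSERT INTO statements; the file names and the regex are placeholders, and it won't cope with quoted commas, multi-row VALUES or escaped strings:

    import json
    import re

    # Naive parser: expects statements like
    #   INSERT INTO users (id, name) VALUES (1, 'Alice');
    pattern = re.compile(
        r"INSERT INTO\s+(\w+)\s*\(([^)]*)\)\s*VALUES\s*\(([^)]*)\);",
        re.IGNORECASE,
    )

    def parse_value(raw):
        raw = raw.strip()
        if raw.startswith("'") and raw.endswith("'"):
            return raw[1:-1]          # quoted string
        if raw.upper() == "NULL":
            return None
        try:
            return int(raw)
        except ValueError:
            try:
                return float(raw)
            except ValueError:
                return raw            # leave anything else as text

    with open("dump.sql") as src, open("out.json", "w") as dst:
        for table, cols, vals in pattern.findall(src.read()):
            doc = dict(zip(
                [c.strip() for c in cols.split(",")],
                [parse_value(v) for v in vals.split(",")],
            ))
            doc["_table"] = table     # remember which table the row came from
            dst.write(json.dumps(doc) + "\n")

You can then load the result with mongoimport, e.g. mongoimport --db coursework --collection mydata out.json (adjust the database and collection names to your setup).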
I would like to know if there's any Python library that supports this conversion. The options I've found so far are SASpy, or going through CSV or a SQL database, but I had no success with them.
This is not really a programming question, but I hope that won't be an issue.
I've found this post:
Export pandas dataframe to SAS sas7bdat format
But I was hoping to find updates on newer libraries that support creating sas7bdat files, and to learn how licensing works for SASpy.
The sas7bdat format is very hard to write. Reading it is fairly doable (though still pretty hard), but writing it is brutal. SAS costs a LOT of money and cannot be purchased (it is leased). My suggestions:
Use one of the products from companies that have done it. Some examples: CoyRoc (SSIS adaptor) $, StatTransfer $, SPSS $$$, SAS (lots of dollar signs). WPS might be able to do it, but they save to their own format to avoid the mess; they probably also support sas7bdat export.
Do not use the sas7bdat format. Consider something else like the SAS Transport (xport) format. Look at my GitHub repository (savian-net) for C# code that can do it; translate it to Python, or find a Python library that can handle SAS Transport (see the sketch below).
The sas7bdat format is a binary, proprietary format that is not published anywhere. Any docs are guesses based on binary sleuthing. It is based on an old mainframe format, and what look like remnants of that appear to be included. My suggestion is to avoid it like the plague and find an alternative.
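As a rough sketch of the second suggestion, pandas plus pyreadstat can write a SAS Transport (.xpt) file; this assumes pyreadstat is installed, and the data frame and file name are only placeholders:

    import pandas as pd
    import pyreadstat   # pip install pyreadstat

    df = pd.DataFrame({
        "id": [1, 2, 3],
        "score": [90.5, 82.0, 77.3],
    })

    # Write a SAS Transport (.xpt) file instead of sas7bdat.
    # file_format_version=5 targets the older, widely supported xport version;
    # check pyreadstat's documentation for the options in your release.
    pyreadstat.write_xport(df, "scores.xpt", file_format_version=5)

SAS can then read the .xpt file with the xport libname engine and save it in whatever format it likes.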
An alternative to using xport as Stu suggested: as of Viya 2021.2.6, SAS supports reading externally generated Parquet files via the new Parquet import engine. As such, you could export the file to Parquet via Python, import that directly into SAS, and save it as a .sas7bdat file.
https://communities.sas.com/t5/SAS-Communities-Library/Parquet-Support-in-SAS-Compute-Server/ta-p/811733
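A minimal sketch of the Python side, assuming pandas and a Parquet engine such as pyarrow are installed (the data frame and file names are placeholders):

    import pandas as pd

    df = pd.DataFrame({
        "id": [1, 2, 3],
        "score": [90.5, 82.0, 77.3],
    })

    # Requires a Parquet engine, e.g. pip install pyarrow
    df.to_parquet("scores.parquet", index=False)

On the SAS side the exact libname/import syntax depends on your Viya release; the linked article walks through it.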
Okay guys, I've been having this problem for a few weeks now and I'm getting nowhere with it. I have OpenOffice and regular Office. Both produce flawed .csv files, or at least phpMyAdmin can't read either of them. Yes, I've tried changing the server's upload settings, etc. I also contacted my web hosting service and they claimed that all the .csv files I've produced are flawed.
Anyway, I'm looking for a way to convert an .xls table to SQL. Most of the software out there costs money that I don't have. Furthermore, I've seen PHP systems that do just that, so I know it's possible.
There's no need to convert to .sql; you can import the file directly with phpMyAdmin, or with a tool like Navicat for MySQL. In phpMyAdmin, go to the Import option, choose the file, select the file type (CSV or CSV using LOAD DATA), and in the section below define the column separator (if you don't know which one is used, open the file with Notepad).
If the file is very large, use Navicat.
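Another way to sidestep the CSV problems entirely is to push the spreadsheet straight into MySQL from Python; a minimal sketch, assuming pandas, SQLAlchemy, a MySQL driver (e.g. pymysql) and an Excel reader (xlrd for old .xls files) are installed, with placeholder credentials and table names:

    import pandas as pd
    from sqlalchemy import create_engine   # pip install sqlalchemy pymysql

    # Placeholder connection string -- replace host, user, password and database.
    engine = create_engine("mysql+pymysql://user:password@localhost/mydb")

    # Read the spreadsheet and load it into MySQL without ever producing a CSV.
    df = pd.read_excel("table.xls")
    df.to_sql("my_table", engine, if_exists="replace", index=False)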
By "flawed" do you mean "defective"? I assume the problem is with Excel; maybe you have the same character defined as the column separator and as the thousands or decimal separator. Try opening the file with OpenOffice.
I'm trying to import fields from a fillable PDF into a SQL database.
I can't seem to find an answer online:
What's the best way to import/read data from pdf files?
Insert a PDF file into Core Data?
http://www.utteraccess.com/forum/Import-Fillable-Pfd-Data-t1971535.html
So I'm wondering: does anyone know how to extract data from a fillable PDF into a database (or into Excel, from which it can then be imported into a database)?
Thanks
Data from fillable PDFs can be exported into an .FDF file, which is a text file. pdftk is a command-line utility that will let you extract the data programmatically. You will then need to write a custom parser to pull the data out of the .FDF file.
It won't be a lot of fun, but it should be doable.
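As a rough illustration of that workflow in Python (assuming pdftk is on your PATH; the file names are placeholders and the FDF parsing here is deliberately naive, ignoring escaping, hex strings and nested structures):

    import re
    import subprocess

    # Dump the form's field data to an FDF file using pdftk.
    subprocess.run(
        ["pdftk", "form.pdf", "generate_fdf", "output", "form.fdf"],
        check=True,
    )

    # Pull /T (field name) and /V (field value) pairs out of each dictionary.
    with open("form.fdf", "rb") as f:
        text = f.read().decode("latin-1")

    fields = {}
    for block in re.findall(r"<<(.*?)>>", text, re.DOTALL):
        name = re.search(r"/T\s*\(([^)]*)\)", block)
        value = re.search(r"/V\s*\(([^)]*)\)", block)
        if name and value:
            fields[name.group(1)] = value.group(1)

    print(fields)   # from here the values can be inserted into the database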
You can use pdftk. I used it and it works like a charm, though it involves a lot of coding. Get back to me if you need any help.
I need to bulk load huge XML files into SQL Server 2005. I decided to use SQLXMLBULKLOAD in my C# app, but I need valid XSD schemas for those XML files in order to load them. What is the best way to generate the XSD files?
I tried MS VS xsd.exe, but it tries to load the whole file into memory, which causes an OutOfMemory exception.
Thanks!
Strip the file down to create a smaller one that is representative of the whole, then generate an XSD from that. You can then tailor the result if necessary.
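A minimal sketch of that sampling step in Python, using xml.etree.ElementTree.iterparse so the whole file is never loaded into memory (file names and the record count are placeholders, and it assumes the records are direct children of the root element):

    import xml.etree.ElementTree as ET

    SOURCE = "huge.xml"    # placeholder paths -- adjust to your files
    SAMPLE = "sample.xml"
    KEEP = 100             # number of top-level records to copy

    context = ET.iterparse(SOURCE, events=("start", "end"))
    _, root = next(context)                 # first event: start of the root element
    sample_root = ET.Element(root.tag, root.attrib)

    depth = 0
    kept = 0
    for event, elem in context:
        if event == "start":
            depth += 1
            continue
        depth -= 1
        if depth != 0:
            continue
        # a direct child of the root has just finished parsing
        sample_root.append(elem)
        kept += 1
        root.clear()                        # keep memory usage flat
        if kept >= KEEP:
            break

    ET.ElementTree(sample_root).write(SAMPLE, encoding="utf-8", xml_declaration=True)

You can then feed sample.xml to xsd.exe (or any other generator) and hand-tune the resulting schema.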
There are quite a few tools that generate schemas from instance documents, but I don't know how many of them can operate in pure streaming mode. One tool that will work regardless of file size is DTDGenerator, which was originally part of Saxon; you can find it here:
http://saxon.sourceforge.net/dtdgen.html
It produces a DTD rather than a schema, but there are plenty of tools available to convert a DTD to a schema.
I would like to play with Stack Overflow's data dump in Oracle. The format they provide is XML and it is very large (one XML file is about 3 GB). I would like to import this data into my Oracle DB. I know someone else on this topic managed to work with the XML directly. Any ideas or suggestions to make this happen easily?
Check out the Groovy SQL and XML libraries; you should be able to get up and running pretty quickly even with minimal Java/Groovy experience.
http://docs.codehaus.org/display/GROOVY/Tutorial+6+-+Groovy+SQL
Groovy XML
You'll need to install Groovy and get the ojdbc14.jar driver from Oracle. Put your code in a file and run:
groovy -cp ojdbc14.jar myscript.groovy