I’m running Octave 5.1.0
In Matlab you can use webread to get a struct from a web page, like this:
data=webread(urlString);
data is then a struct, ready to use. (urlString specifies that the format should be JSON, but I can also get the data in XML format from this web page.)
Can I achieve this in Octave? (I can’t use Matlab for this project).
I tried using
data=urlread(urlString);
data is then in string format. I could use regexp to extract the information I need, but I’m hoping there is an easier way.
I’ll be grateful for any suggestions.
Core Octave does not support JSON reading and writing (yet), but there are a few packages that do so. Try one of them.
https://github.com/apjanke/octave-jsonstuff
https://github.com/Andy1978/octave-rapidjson
https://github.com/fangq/jsonlab
I would like to know if there is any Python library that supports converting a pandas DataFrame to the SAS sas7bdat format. The options I've found so far are SASpy, or going through CSV or a SQL database, but I haven't had any success with them.
This is not really a programming question, but I hope that won't be an issue.
I've found this post:
Export pandas dataframe to SAS sas7bdat format
But I was hoping to find updates on newer libraries that support creating sas7bdat files, and to learn how licensing works for SASpy.
The sas7bdat format is very hard to write. Reading it is fairly doable (though still pretty hard), but writing it is brutal. SAS itself costs a LOT of money and cannot be purchased outright (it is leased). My suggestions:
Use one of the products from companies that have already done it. Some examples: CoyRoc (SSIS adaptor) $, StatTransfer $, SPSS $$$, SAS (lots of dollar signs). WPS might be able to do it, but they save to their own format to avoid the mess; they probably also support sas7bdat export.
Do not use the sas7bdat format. Consider something else, like the SAS Transport (xport) format. Look at my GitHub repository (savian-net) for C# code that can do it. Translate that to Python, or find a Python library that can handle SAS Transport (a rough sketch follows at the end of this answer).
The sas7bdat format is a binary, proprietary format whose specification is simply not published anywhere. Any documentation out there is guesswork based on binary sleuthing. It is based on an old mainframe format, and what look like leftover remnants of that appear to be included. My suggestion is to avoid it like the plague and find an alternative.
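Picking up the Python-library route: a minimal sketch of what writing SAS Transport from Python could look like, assuming the pyreadstat package and its write_xport function (the function name and arguments are from memory, so please verify them against the pyreadstat documentation):
# Sketch: write a pandas DataFrame to SAS Transport (.xpt) via pyreadstat.
# Assumption: pyreadstat exposes write_xport(); check the docs for the exact signature.
import pandas as pd
import pyreadstat

df = pd.DataFrame({"id": [1, 2, 3], "value": [10.5, 20.1, 30.7]})

# Produces an xport file that SAS can read with its XPORT engine.
pyreadstat.write_xport(df, "example.xpt")
SAS can then read the resulting .xpt file and, if needed, save it out as a .sas7bdat data set on the SAS side.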
An alternative to using xport, as Stu suggested: as of Viya 2021.2.6, SAS supports reading externally generated Parquet files via the new Parquet import engine. So you could export the data to Parquet from Python, import that directly into SAS, and save it as a .sas7bdat file there.
https://communities.sas.com/t5/SAS-Communities-Library/Parquet-Support-in-SAS-Compute-Server/ta-p/811733
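For the Python side of that, a minimal sketch (file names are just placeholders; it needs pyarrow or fastparquet installed as the Parquet engine):
# Sketch: export a pandas DataFrame to Parquet for SAS/Viya to import.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "value": [10.5, 20.1, 30.7]})
df.to_parquet("example.parquet", index=False)
The import into SAS and the save to .sas7bdat would then happen with the Viya Parquet engine described in the linked article.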
I just read
How to parse a OFX (Version 1.0.2) file in PHP?
I am not a developer. What easy tool can I use to make this code run, with no coding skill or much appetite for it? Running it through a web browser is pretty hard for non-developers.
I need this in order to use the file in Power BI, which accepts M code, JSON, or XML sources, but not SGML OFX or PHP.
Thanks in advance
Welcome to Stack Overflow, Didier!
I'm going to try to give you an idea of how I'd approach the problem here. But keep in mind that your question really lacks the details we need to help you, so I'm asking you to update it with example data that you want to integrate into Power BI. Also, I'm not too familiar with Power BI or PHP, and I won't go into making the PHP code you linked run for you.
Instead, I'd suggest converting your OFX file into XML and then using Power BI's XML import on the converted file.
From your linked question, I gather that your OFX file is in SGML format. There's a program specifically designed to convert SGML into XML (which is just a restricted form of SGML) called osx. I've detailed how to install it on Linux and Mac OS in another question related to SGML-to-XML down-converting; if you're on Windows, you may have luck just downloading a really ancient (32-bit) version of it from ftp://ftp.jclark.com/pub/sp/win32/sp1_3_4.zip. Alternatively, you can use my sgmljs.net software as explained in Converting HTML to XML, though that tutorial is really about the much more complex task of converting HTML to XML/XHTML and will probably confuse you.
Anyway, if you manage to install osx, running it on your OFX file (which I assume to have the name yourfile.ofx just for illustration) is just a matter of invoking (on the Windows or Linux/Mac OS command line):
osx yourfile.ofx > yourfile.xml
to produce yourfile.xml, which you can then attempt to load with Power BI.
Chances are your OFX file has additional text at the beginning (lines like XYZ:0001 that come before <ofx>). In that case, you can just remove those lines with a text editor before invoking osx on the file. Maybe you also need a .dtd file or additional instructions at the top of the OFX file telling the SGML parser about the grammar of your file; it's really difficult to say without seeing actual test data.
Before bothering with SGML and all that, however, I suggest removing those first few lines in your OFX file (everything up to the first < character) and checking whether Power BI can already recognize the changed file as XML (which, judging from other OFX example files, has a good chance of succeeding). Be sure to work on a copy of your original file rather than overwriting it. Then come back and update your question with your results and example data.
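As a convenience, if editing the file by hand is awkward, here is a small Python sketch that could do that trimming for you (yourfile.ofx is the placeholder name from above; adjust the encoding if your file uses a different one):
# Sketch: keep everything from the first '<' onward, dropping the OFX header lines.
# Writes to a new file so the original stays untouched.
with open("yourfile.ofx", "r", encoding="latin-1") as f:
    text = f.read()

start = text.find("<")  # everything before the first tag is header text
if start > 0:
    text = text[start:]

with open("yourfile_trimmed.ofx", "w", encoding="latin-1") as f:
    f.write(text)
The trimmed file is what you would then feed to osx, or try to load into Power BI directly.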
Multiple source systems that I want to process with Azure Data Lake contain carriage return/line feed characters within a column.
This causes Extract in ADLA to fail with the following error:
E_RUNTIME_USER_EXTRACT_UNEXPECTED_ROW_DELIMITER
I'm trying to find a working configuration so that I no longer run into this issue. The documentation for the native extractors on microsoft.com describes this:
Note that the rowDelimiter character inside a quoted string will not be escaped and will be used as a row separator which will lead to incorrect or failing extractions.
https://msdn.microsoft.com/en-us/azure/data-lake-analytics/u-sql/extractor-parameters-u-sql
Unfortunately, it does not mention a good workaround.
I tried switching to another format such as ORC or Parquet. However, for the time being these do not seem to be fully supported yet, and since that limits the functionality of the development environment, I would rather not use those formats for now.
This issue seems very likely to occur in practice, yet I am unable to find a good solution. What is a good, standard way to work around it while keeping the convenience of storing files as CSV/TSV?
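For illustration, here is a minimal Python sketch (with made-up data) of the kind of record that triggers the error: the quoted field contains a line break, which the built-in extractor then treats as a row separator:
# Sketch: write a CSV row whose quoted field contains an embedded CR/LF.
# U-SQL's built-in Extractors.Csv() trips over the line break inside the quotes.
import csv

with open("example.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "comment"])
    writer.writerow([1, "first line of the comment\r\nsecond line of the comment"])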
I've accomplished this by creating a custom extractor based on a third-party CSV parser: specifically, the CsvParser class from Josh Close's fantastic CsvHelper library. It works like a charm. Don't forget to set AtomicFileProcessing = true.
How do I dynamically generate SQL code for a SQLite3 database with something like a template engine? I'm new to this, and I'm already using jinja2 to generate HTML. Could jinja2 be used for this purpose as well, and if so, is that a good idea? I would like my program to save all of its data (the attribute values of instances of various classes) before it stops running.
Check out https://pypi.python.org/pypi/jinjasql/0.1.6
It seems to basically apply jinja2 to SQL templates. I haven't played with it much yet, but I was just asking the same question and came upon it.
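A rough sketch of how that could look against SQLite (I haven't verified JinjaSql's exact API, so treat the param_style and prepare_query details as assumptions and check them against the project's README):
# Sketch: render a SQL template with JinjaSql and run it against SQLite3.
# Assumption: JinjaSql(param_style="qmark") and prepare_query() behave as in the README.
import sqlite3
from jinjasql import JinjaSql

template = """
    SELECT name, value
    FROM settings
    WHERE owner = {{ owner }}
    {% if min_value %}
    AND value >= {{ min_value }}
    {% endif %}
"""

j = JinjaSql(param_style="qmark")  # '?' placeholders, which sqlite3 expects
query, bind_params = j.prepare_query(template, {"owner": "alice", "min_value": 10})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (name TEXT, value INTEGER, owner TEXT)")
conn.execute("INSERT INTO settings VALUES ('volume', 40, 'alice')")

for row in conn.execute(query, list(bind_params)):
    print(row)
The values go through bind parameters rather than being pasted into the SQL string, which is the main reason to prefer something like JinjaSql over plain string templating here.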
I'm turning to your expert advice because I am somewhat "new" to Objective-C. I have read a couple of books and docs (namely Aaron Hillegass's and Stephen G. Kochan's books), but some things are still unclear to me for lack of practice.
To give you some context, I have an NSDocument project that uses Core Data for storage.
I'm struggling with two things right now: reading/writing files, and table views ^^
So my first question is about Core Data: is it only able to save in the SQLite, XML, or binary store formats?
Or can I use Core Data to read/write any format, according to what I declare in the plist file?
I am trying to work with .po files, and I want to display the translations in a table view with two columns (one for the msgid and the other for the msgstr).
To read and write files in the .po format and display their lines in my table view, I most likely need to parse the files using line endings and characters such as "#" as delimiters.
I haven't gotten around to doing that yet (I have no idea how to do it yet!), but I would like to know whether it is possible, or whether I need to restart my project without Core Data...
Please DO NOT just throw links to the Apple documentation at me; it's the most confusing thing ever and feels like it's made for experts only! I need me some human-readable explanations :)
Thanks a bunch for any help and advice you can give me!
It is possible to write a different storage format for Core Data, but it is not easy, and it sounds like you are not at a level where that is a realistic option (no shame there; I'm not either).
If you are only displaying data from the .po files, then there is no need to use Core Data. Core Data is meant to provide a storage solution: you create/edit data and save it using Core Data. If you have no intention of creating and editing data, then get rid of Core Data; it will only get in the way.