Problems uploading data from CSV autodesk forge - data-visualization

Good Morning,
I'm using the Autodesk Forge Data Visualization API. I'm trying to upload CSV data that is in exactly this format: https://github.com/Autodesk-Forge/forge-dataviz-iot-reference-app/blob/main/server/gateways/csv/Hyperion-1.csv, but what I get is an internal error 500:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
Hyperion.Data.Adapter.js?73b8:543
SyntaxError: Unexpected token s in JSON at position 0 Hyperion.Data.Adapter.js?73b8:543
eval # Hyperion.Data.Adapter.js?73b8:543
Could the problem be the format of the CSV file? These are the environment variables I have set:
ADAPTER_TYPE= csv
CSV_MODEL_JSON=server\gateways\synthetic-data\device-models.json
CSV_DEVICE_JSON=server\gateways\synthetic-data\devices.json
CSV_FOLDER=server\gateways\csv
CSV_DATA_START= #Format: YYYY-MM-DDTHH:MM:SS.000Z
CSV_DATA_END= #Format: YYYY-MM-DDTHH:MM:SS.000Z
CSV_DELIMITER="\t"
CSV_LINE_BREAK="\n"
CSV_TIMESTAMP_COLUMN="time"
CSV_FILE_EXTENSION=".csv"
This is the code I'm using: https://github.com/Autodesk-Forge/forge-dataviz-iot-reference-app

I have just answered a similar question here: Setting up a CSV Data Adapter locally. When you follow the steps listed in that answer, the sample app should read the CSV data without problems.
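If it still fails after that, a quick way to rule out the file format is to parse one of the CSV files yourself with the same delimiter and timestamp column you configured. This is just a diagnostic sketch; the path, delimiter and column name simply mirror the settings quoted above, so adjust them to your setup:
import csv
from datetime import datetime

CSV_PATH = "server/gateways/csv/Hyperion-1.csv"  # example file from the repo
DELIMITER = "\t"                                 # value of CSV_DELIMITER
TIMESTAMP_COLUMN = "time"                        # value of CSV_TIMESTAMP_COLUMN

with open(CSV_PATH, newline="") as f:
    reader = csv.DictReader(f, delimiter=DELIMITER)
    print("Columns found:", reader.fieldnames)
    for i, row in enumerate(reader):
        if not row.get(TIMESTAMP_COLUMN):
            raise ValueError(f"Row {i} has no '{TIMESTAMP_COLUMN}' value - check CSV_DELIMITER")
        # CSV_DATA_START/END use the format YYYY-MM-DDTHH:MM:SS.000Z, so assume the same here
        datetime.fromisoformat(row[TIMESTAMP_COLUMN].replace("Z", "+00:00"))
        if i >= 5:
            break

print("First rows parsed fine with the configured delimiter and timestamp column.")
If the column list comes back as a single concatenated string, the delimiter configured in the .env file does not match the one actually used in the file, which is one possible cause of the server-side 500.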

Related

Reading *.cdpg file with python without knowing structure

I am trying to use Python to read a .cdpg file that was generated by LabVIEW code. I do not have access to any information about the structure of the file. Using another post I have had some success, but the numbers are not making any sense. I do not know if my code is wrong or if my interpretation of the data is wrong.
The code I am using is:
import struct

with open(file, mode='rb') as file:  # b is important -> binary
    fileContent = file.read()

ints = struct.unpack("i" * ((len(fileContent) - 24) // 4), fileContent[20:-4])
print(ints)
The file is located here. Any guidance would be greatly appreciated.
Thank you,
T
According to the documentation here https://www.ni.com/pl-pl/support/documentation/supplemental/12/logging-data-with-national-instruments-citadel.html
The .cdpg files contain trace data. Citadel stores data in a compressed format; therefore, you cannot read and extract data from these files directly. You must use the Citadel API in the DSC Module or the Historical Data Viewer to access trace data. Refer to the Citadel Operations section for more information about retrieving data from a Citadel database.
.cdpg is a closed format containing compressed data. You won't be able to interpret these files properly without knowing the file format's structure. You can read the raw binary content, which is what your example Python code is actually doing, but the unpacked numbers won't be meaningful.
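If you want to convince yourself of that, it is more informative to look at the raw bytes than to force-unpack them as 32-bit integers. A minimal sketch, with a placeholder path for your .cdpg file:
# Dump the first bytes as hex and printable ASCII. A readable magic number
# or text header would hint at the container format; high-entropy bytes are
# what you would expect from compressed data.
from pathlib import Path

raw = Path("trace.cdpg").read_bytes()  # placeholder path
chunk = raw[:64]
print(chunk.hex())
print("".join(chr(b) if 32 <= b < 127 else "." for b in chunk))
Either way, to get the actual trace values you will need the Citadel API in the DSC Module or the Historical Data Viewer, as the documentation states.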

TFTransform ValueError: "Split does not exist over all example artifacts"

I'm attempting to construct a TFX pipeline, but keep running into an error during the TFTransform component step. After diving into the error message and its code on GitHub, it appears to have something to do with a function called get_split_uris(). From what I can glean, there is a mismatch between the number of Artifacts being consumed by this function at runtime and the number of URIs being retrieved and matched back to that list.
It's odd because my CSVExampleGen() function doesn't seem to have any problems ingesting my original data set that's already split into two CSV files: 'target' and 'candidate'. I cannot find any documentation on this error on the TFX website so my apologies for not having more information.
I can provide additional details if needed.
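Not a confirmed fix, but one thing worth double-checking given that error: when the data is already split into separate files, ExampleGen usually needs an explicit input_config whose split names line up with what the downstream components consume. A rough sketch of that configuration, with a placeholder data root and your two split names:
from tfx.components import CsvExampleGen
from tfx.proto import example_gen_pb2

# Hypothetical layout: data_root/target/*.csv and data_root/candidate/*.csv
input_config = example_gen_pb2.Input(splits=[
    example_gen_pb2.Input.Split(name="target", pattern="target/*"),
    example_gen_pb2.Input.Split(name="candidate", pattern="candidate/*"),
])

example_gen = CsvExampleGen(input_base="data_root", input_config=input_config)
Note that several downstream components assume splits named 'train' and 'eval' by default, so custom split names like 'target' and 'candidate' may be what get_split_uris() fails to find over all example artifacts.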

Loading DICOM from zip archive

I am trying to load DICOMs from a DICOM Server. Loading a single file with the URL is working fine.
Now I want to load a whole series of DICOM Data. I get the data from the server with an HTTP-request as a zip archive.
I have tried to unzip the response with the zip.js library and pass the unzipped content to the loader.parse function, to load the DICOMs as in the "viewers_upload" example, but I get an error saying the file could not be parsed.
Is there a way to load the data without the URL? Or how do I have to modify the example so that it will work for a zip archive?
This is the code from unzipping the file and passing it to the parser:
reader.getEntries(function(entries) {
    if (entries.length) {
        // getting one entry from the zip file
        entries[0].getData(new zip.ArrayBufferWriter(), function (dicom) {
            loader.parse({url: "dicomName", dicom});
        }, function (current, total) {
            // progress callback
        });
    }
});
The error message is:
"dicomParser.readFixedString: attempt to read past end of buffer"
"Uncaught (in promise) parsers.dicom could not parse the file"
I think the problem might be the data type returned from the zip file. Which type do I have to pass to the parse function? What structure does the parser expect the data to have? What buffer length does the parser expect?

how to get more info than generic "Failed to parse JSON: No active field found.; ParsedString returned false; Could not parse value" on BigQuery load?

We're trying BigQuery for the first time, with data extracted from mongo in json format. I kept getting this generic parse error upon loading the file. But then I tried a smaller subset of the file, 20 records, and it loaded fine. This tells me it's not the general structure of the file, which I had originally thought was the problem. Is there any way to get more info on the parse error, such as the string of the record that it's trying to parse when it has this error?
I also tried using the max errors field, but that didn't work either.
This was via the website. I also tried it via the Google Cloud SDK command line 'bq load...' and got the same error.
This error is most likely caused by some of the JSON records not complying with the table schema. It is not clear whether you used the schema autodetect feature or supplied a schema for the load, but here is one example where such an error could happen:
{ "a" : "1" }
{ "a" : { "b" : "2" } }
If you only have a few of these and they are invalid records, you can automatically skip them by using the max_bad_records option for the load job. More details at: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-json
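If you want to find out which records are the problem before loading, one option is to scan the newline-delimited JSON file locally. A rough sketch (the file name is a placeholder) that flags lines that are not valid JSON and top-level fields whose type changes between records, which is the kind of mismatch shown above:
import json

reference_types = {}  # field name -> type name seen the first time the field appears

with open("mongo_export.json") as f:  # placeholder: your newline-delimited JSON file
    for lineno, line in enumerate(f, start=1):
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            print(f"line {lineno}: invalid JSON ({exc}): {line[:120]}")
            continue
        for field, value in record.items():
            type_name = type(value).__name__
            if field not in reference_types:
                reference_types[field] = type_name
            elif reference_types[field] != type_name:
                print(f"line {lineno}: field '{field}' is {type_name}, previously {reference_types[field]}")
Lines reported here are good candidates to fix, or to let max_bad_records skip.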

google big query: export table to own bucket results in unexpected error

I'm stuck trying to export a table to my Google Cloud Storage bucket.
Example job id: job_0463426872a645bea8157604780d060d
I tried the Cloud Storage target with a lot of different variations; all reveal the same error. If I try to copy the natality report, it works.
What am I doing wrong?
Thanks!
Daniel
It looks like the error says:
"Table too large to be exported to a single file. Specify a uri including a * to shard export." Try switching the destination URI to something like gs://foo/bar/baz*
Specify the file extension along with the pattern, for example:
gs://foo/bar/baz*.gz in the case of GZIP (compressed)
gs://foo/bar/baz*.csv in the case of CSV (uncompressed)
Here foo is the bucket name and bar can be, for instance, a date string generated on the fly.
I was able to do it with:
bq extract --destination_format=NEWLINE_DELIMITED_JSON myproject:mydataset.mypartition gs://mybucket/mydataset/mypartition/{*}.json