DBC File Format Speed Issue - embedded

I have a sensor that gives me a message, and I want to transfer this data via CAN. My receiver requires a .dbc file from me (database for CAN). On my local setup (with PCAN) I can send the data every 1 millisecond.
But after I connect the CAN cable to my receiver, it receives the data only once per second. I think the problem is in the .dbc file. Is there any definition to configure the data rate in a .dbc file?
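For reference: a .dbc file typically describes message layout (IDs, signals, scaling) and may carry tool-read attributes such as GenMsgCycleTime, but with most stacks the actual transmit period is chosen by the sending application rather than enforced by the .dbc file. Below is a minimal sketch using python-can and cantools; the file name, message name, signal name, and channel are assumptions for illustration.

import can
import cantools

# All names here (file, message, signal, channel) are assumed for illustration.
db = cantools.database.load_file("sensor.dbc")
msg = db.get_message_by_name("SensorData")
data = msg.encode({"Temperature": 25.0})  # hypothetical signal

bus = can.Bus(interface="pcan", channel="PCAN_USBBUS1", bitrate=500000)
frame = can.Message(arbitration_id=msg.frame_id,
                    data=data,
                    is_extended_id=msg.is_extended_frame)

# The 1 ms transmit period is set here, by the sender, not by the .dbc file.
task = bus.send_periodic(frame, period=0.001)
# ... later: task.stop(); bus.shutdown()

If frames arrive only once per second at the receiver, it is also worth confirming that both nodes agree on the bit rate; the BS_ section of a .dbc file is typically left empty and ignored by most tools, so the bit rate has to be configured on each node separately.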

Related

Reading a *.cdpg file with Python without knowing the structure

I am trying to use Python to read a .cdpg file. It was generated by LabVIEW code, and I do not have access to any information about the structure of the file. Using another post I have had some success, but the numbers make no sense. I do not know if my code is wrong or if my interpretation of the data is wrong.
The code I am using is:
import struct

with open(file, mode='rb') as f:  # 'b' is important -> binary
    fileContent = f.read()

# Skip a presumed 20-byte header and 4-byte trailer, then unpack the rest
# as 4-byte integers.
ints = struct.unpack("i" * ((len(fileContent) - 24) // 4), fileContent[20:-4])
print(ints)
The file is located here. Any guidance would be greatly appreciated.
Thank you,
T
According to the documentation at https://www.ni.com/pl-pl/support/documentation/supplemental/12/logging-data-with-national-instruments-citadel.html:

The .cdpg files contain trace data. Citadel stores data in a compressed format; therefore, you cannot read and extract data from these files directly. You must use the Citadel API in the DSC Module or the Historical Data Viewer to access trace data. Refer to the Citadel Operations section for more information about retrieving data from a Citadel database.
.cdpg is a closed format containing compressed data. You won't be able to interpret the files properly without knowing the file format's structure. You can read the raw binary content, and that is what your example Python code is actually doing.
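If you still want to poke at the raw bytes, a hex dump of the first bytes can at least reveal magic numbers or embedded text markers. This is a generic inspection sketch (the path is a placeholder), not a Citadel-aware parser:

import binascii

path = "trace.cdpg"  # placeholder path

with open(path, "rb") as f:
    header = f.read(64)

# Print offset, hex bytes, and an ASCII rendering, 16 bytes per row.
for offset in range(0, len(header), 16):
    chunk = header[offset:offset + 16]
    hexpart = binascii.hexlify(chunk, " ").decode()
    asciipart = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    print(f"{offset:04x}  {hexpart:<47}  {asciipart}")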

Rename filename.ext.crswap to filename.ext rather than copying

When performing this sequence:
1. Obtain a handle to a new file via window.showSaveFilePicker, say filename.ext
2. Obtain a writable file stream from the handle
3. Write some content into the file using the stream
4. Close the stream to signal completion
the File System API writes to filename.ext.crswap and, on close, copies filename.ext.crswap to filename.ext.
Is there a reason that filename.ext.crswap is not renamed to filename.ext instead?
The reason for this behavior is to avoid partial writes:
"User agents try to ensure that no partial writes happen, i.e. the file represented by fileHandle will either contain its old contents or it will contain whatever data was written through stream up until the stream has been closed."—Spec.

OPEN_PIPE_NO_AUTHORITY upon opening non-existing file using OPEN DATASET FOR OUTPUT IN BINARY MODE without FILTER

I have a very strange problem.
I have a standard program with the following piece of code that tries to create a file in response to a previous attempt to open it with OPEN DATASET ... FOR INPUT IN BINARY MODE.
CATCH SYSTEM-EXCEPTIONS dataset_too_many_files    = 6
                        open_dataset_no_authority = 7
                        open_pipe_no_authority    = 8
                        dataset_no_pipe           = 9.
  " Try to create the file; MESSAGE msg captures the OS error text.
  OPEN DATASET filename FOR OUTPUT IN BINARY MODE
    MESSAGE msg.
ENDCATCH.
Surprisingly, the response to that is sy-subrc = 8, which according to the SAP documentation can happen only when OPEN DATASET is used with FILTER.
The message in the msg variable says "File could not be opened", which is confusing because we are trying to create this file.
Has anybody experienced something like that? I suppose it has something to do with the authority to create a file in the given directory at the operating-system level, but I cannot find any other log or trace of that. The error message and sy-subrc = 8 seem to be actually misleading in this case. Could more information be revealed by activating tracing in ST01?
It turned out that the cause of the problem was, in the first place, the missing directory in which the file was supposed to be created. No wonder the system could not create a file in a non-existent folder. The error message is misleading in such a case anyway.
See the OPEN DATASET documentation and the OPEN DATASET OS additions.

"Surprisingly the response to that is sy-subrc = 8 which according to SAP documentation can happen only when OPEN DATASET is used with FILTER."

That is not exactly what the documentation says; it is worth another look. For the OPEN DATASET statement, sy-subrc = 8 means: the operating system could not open the file.

How to get RAW16 from a CX3

This is my data flow for my system:
Because I could not find a demo that configures RAW16, and the enum type CyU3PMipicsiDataFormat_t does not contain a RAW16 type, I did not know how to transfer my RAW16 data to the host.
I tried using the YUV422 configuration to transfer my raw data to the host, and I did receive data from the CX3 via e-cam, but the image is wrong because e-cam uses the YUY2 format to decode the raw data. Now I think I can use MATLAB to grab a frame and deal with it. But when I take a snapshot in MATLAB, the data has a shape like 1280*800*3 (full frame size: 1280x800). Does MATLAB treat it as YUV data? How can I configure the CX3 to support RAW16, or how should I deal with the data I grab from the CX3 over the YUV-format transfer?
Has any other developer had a requirement like mine?
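One common workaround when RAW16 is tunneled through a YUV422 (2 bytes per pixel) pipe is to capture the unconverted frame on the host and reinterpret each byte pair as a 16-bit sample. A minimal Python/NumPy sketch, assuming a 1280x800 frame, little-endian pixel order, and a dump of the raw pre-conversion buffer; all names and sizes here are assumptions:

import numpy as np

WIDTH, HEIGHT = 1280, 800  # assumed frame geometry

# frame.bin: hypothetical dump of one unconverted frame,
# expected length = WIDTH * HEIGHT * 2 (two bytes per pixel).
with open("frame.bin", "rb") as f:
    raw_bytes = f.read()

assert len(raw_bytes) == WIDTH * HEIGHT * 2, "not a 2-byte-per-pixel frame"

# Reinterpret each byte pair as one little-endian 16-bit pixel.
frame = np.frombuffer(raw_bytes, dtype="<u2").reshape(HEIGHT, WIDTH)
print(frame.shape, frame.dtype, frame.min(), frame.max())

Note that a 1280x800x3 array from MATLAB suggests the capture path has already converted the stream to RGB, which destroys the original 16-bit values; for this to work you need the frame before any YUV-to-RGB conversion.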

How to handle file inputs with changing schemas in Talend

Question: How do I continue to process files that differ substantially from a base schema and that trigger tSchemaComplianceCheck errors?
Background
Suppose I have a folder of Customer .xls files called file1, file2, ..., file1000. Assume I have imported the file schema into the Talend repository, called it 6Columns, and configured the Talend job to iterate through each of the files and process them:
1-tFileInput ->2-tSchemaCompliance-6Columns -> 3-tMap ->4-FurtherProcessing
Read each Excel file
Compare it to the schema 6Columns
Format the output (rename columns)
Take the collection of Customer data and process it further
While processing, I notice that the schema compliance check is generating errors (errorCode 16) pointing to a number of files (200) with a different schema, 13Columns, but there isn't a way to identify these files in advance and filter them into a subjob.
How do I amend my processing to correctly integrate the files with the 13Columns schema (what's the recommended way of handling this), and how do I design for the case where other schema changes occur?
1-tFileInput ->2-tSchemaCompliance-6Columns -> 3-tMap ->4-FurtherProcessing
|
|Reject Flow (ErrorCode 16)
|Schema-13Columns
|
|-> ??
Current thinking when ErrorCode 16 is detected:
Option 1 (parallel): take the file path of the current file and process it against 13Columns using a new tFileInput, before merging the two flows back into one.
Option 2 (serial): collect the list of files that triggered the error and process them after I've finished with the compliant files.
You could try something like the following:
tFileList - read your input repository
tFileInput "schema6" - tSchemaComplianceCheck: read the files with the 6-column schema
tMap_1 - further processing
In the reject part:
tMap after the reject link - add a new column containing the file path that was rejected
tFlowToIterate - used to get an iterate link, an acceptable input for the tFileInputDelimited that follows
tFileInput - read the data with the 13-column schema. The components that follow are the same as in part 1.
After that, you can push your data to tHashOutput, in order to read it further in another subjob.
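Outside of Talend, the same route-by-column-count dispatch can be sketched in a few lines of Python. This is only an illustration of the logic (the folder name, file pattern, placeholder column names, and the decision to keep the first six columns of the wider files are all assumptions), not a substitute for the Talend components above:

import glob
import pandas as pd

TARGET = [f"col{i}" for i in range(1, 7)]  # placeholder 6-column schema

frames = []
for path in glob.glob("customers/*.xls"):  # hypothetical input folder
    df = pd.read_excel(path)
    if len(df.columns) == 6:
        df.columns = TARGET     # compliant file: rename columns and keep
    elif len(df.columns) == 13:
        df = df.iloc[:, :6]     # wider file: keep the first six columns
        df.columns = TARGET     # (assumption: they map onto the base schema)
    else:
        print(f"unknown schema ({len(df.columns)} columns): {path}")
        continue
    frames.append(df)

# Merge both flows back into one dataset, analogous to reading the rows
# from tHashOutput in another subjob.
merged = pd.concat(frames, ignore_index=True)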