LabVIEW historical trend timestamp

I am currently working on a project in which I read some values from a MODBUS device using the DSC module.
I declared three shared variables and enabled logging for them. I have succeeded in reading the values and saving them to a Citadel database correctly. The problem is that when I use a historical trend to show the data (by assigning the start and end timestamps), it always shows results from 1/1/1904 (the LabVIEW timestamp epoch, i.e. timestamps of zero).
Any suggestion to solve the problem?

Related

Is there a way to access raw data stored in YouTrack?

In YouTrack reports, you can view issues by two fields, using the creation date as the y-axis and any other field as the x-axis. But when you do that, as in this graph, you see the number of issues that are currently in the state shown on the x-axis. For example, if the x-axis is the state, you will see the current states of the issues that were created in the date intervals of the y-axis. But I also want to see the number of issues in each state in a chronological way: I want to see the states (or some other field) of the issues on May 21, 2021 (not their current states, but their states on May 21).
I know that YouTrack keeps the state changes, their dates, and a lot of other data like that, because in various reports I can see that YouTrack uses historical data; but there is usually no way to download the data behind those reports.
I want to access all that raw data. My plan is to create some reports that are not available in YouTrack Reports, using R or Python. Is there a way to access that raw data, or a guideline for accessing it?
The way to access raw data in YouTrack is through the REST API. For example, you can get an issue's activity data to retrieve the history of changes applied to it. This way you can identify how things have changed chronologically.
As for "I can see that YouTrack uses past data but usually there is no way to download the data of those reports": report data can be accessed via the API as well. The reports endpoint is api/reports; however, it's not documented, as it may be subject to change, so we can't guarantee backward compatibility. If you are fine with that, you can still use it. To see the exact request, check the network requests in the browser when loading a report.
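For illustration, here is a minimal Python sketch of pulling an issue's change history through the activities endpoint mentioned above. The base URL, issue ID, token, and the exact fields list are placeholders/assumptions to adapt to your own instance:

import requests

# Minimal sketch, assuming a YouTrack 2018.1+ instance with the current REST API.
# BASE_URL, ISSUE_ID and TOKEN are placeholders; the fields list is illustrative.
BASE_URL = "https://youtrack.example.com"
ISSUE_ID = "PRJ-123"
TOKEN = "perm:..."  # a permanent token

resp = requests.get(
    f"{BASE_URL}/api/issues/{ISSUE_ID}/activities",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={
        "categories": "CustomFieldCategory",  # restrict to custom-field changes, e.g. State
        "fields": "timestamp,author(login),field(presentation),added(name),removed(name)",
    },
)
resp.raise_for_status()
for change in resp.json():
    # 'timestamp' is milliseconds since the Unix epoch
    print(change["timestamp"], change.get("field"), change.get("removed"), "->", change.get("added"))

The timestamps come back as milliseconds since the Unix epoch, so they are straightforward to convert for chronological reports in R or Python.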

How to get data from another system

I need to get some data from one system to another.
So far, I have used the function module (FM) RFC_READ_TABLE, where I filled in all the required fields and got the data I needed from the other system's table.
I can't use RFC_READ_TABLE in the task I'm working on right now because of an interface agreement. I need to get the posting status of an invoice. I also found the FM BAPI_BILLINGDOC_GETDETAIL, but this FM doesn't exist on the development system I'm working on; it only exists on the system where the confidential data is stored. I tried to google the problem, but I couldn't find a good example of getting data from another system.
My question is: how do I get data from another system with the FM BAPI_BILLINGDOC_GETDETAIL?
OK, I found out what the problem was: there are no BAPI FMs on our development system. The way to get data from another system is to add the DESTINATION addition (naming an RFC destination, maintained in transaction SM59) to the CALL FUNCTION statement:
CALL FUNCTION 'BAPI_BILLINGDOC_GETDETAIL'
  DESTINATION lv_destination "<====== added this line
  EXPORTING
    ...

MS Access TransferSpreadsheet VBA to include extra information in import data

I am building an Access 2010 db which will store and query information relating to time spent by users in our team. Part of the reporting needs to include whether timesheets have been submitted on time.
The process is currently being managed in Excel but is becoming cumbersome due to the growing size of the consolidated data. In the current process, the flag indicating whether someone is late with their timesheet is applied manually.
Instead of manually adding a Yes/No value to the Excel data, I wondered whether it was possible to set up separate TransferSpreadsheet processes in Access to upload the Excel data (and attach them to separate command buttons), such that, depending on which one is executed, the import process adds a Yes or a No value to the last column of the data as it's being uploaded.
That way we can import the Excel data for those who submitted their timesheets on time (and 'stamp' them Yes for being on time), and any timesheet data submitted late can be imported afterwards (and 'stamped' with a No).
I have spent several hours looking at online forums and instruction pages but cannot find anything close to what I am trying to achieve, hence the reason for posting this here.
This is just one of the options I am considering, but my VBA skills are insufficient to establish whether such a process could be handled in VBA. All help appreciated. Thanks.
Solved this one myself with a bit of perseverance. I ended up running a few DoCmd.RunSQL commands to alter/delete/insert into the tables I had, used a staging ('join') table to load the data from Excel, and then ran a command to append the data from the staging table to the main table. I just invoke slightly different commands that set the flag field depending on whether the data was submitted late or on time.
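For anyone wanting to script the same staging-table pattern from outside Access, here is a rough Python sketch using pyodbc (inside Access itself the equivalent SQL would run via DoCmd.RunSQL); the database path, table names, and the OnTime text column are assumptions:

import pyodbc

# Connect to the Access database through the Access ODBC driver;
# the file path, table and column names below are assumptions.
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\timesheets.accdb;"
)
cur = conn.cursor()

def append_import(on_time):
    # Append the staged rows to the main table, stamping the OnTime flag.
    cur.execute(
        "INSERT INTO tblTimesheets (UserName, HoursWorked, OnTime) "
        "SELECT UserName, HoursWorked, ? FROM tblImportStaging",
        "Yes" if on_time else "No",
    )
    cur.execute("DELETE FROM tblImportStaging")  # clear the staging table for the next load
    conn.commit()

append_import(True)  # run after importing the on-time submissions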

Creating a test-data container in Azure blob storage

I'm adding some testing to my current project, which uses Azure blob storage to store telemetry data coming from a Stream Analytics job. I want to test the routines that get the telemetry data, so I created a separate container for test data. I downloaded a sample set of data, modified it to serve my needs, and re-uploaded everything (using Azure Storage Explorer) into the new container.
The tests immediately failed, and I quickly found out that this is because the LastModified date of the files changed to the date/time of upload. That by itself is fine, but the sequence of the upload was also different. My code uses the modified date of a file to find out which one is the most recent, and it would now return a different file based on the new dates.
I found that you cannot modify this property directly, although you can change another property to make it update. So I know one solution: I could write a quick script which gets the sequence of files from my production instance and then touches every file in the test instance in the same sequence.
But... I was wondering whether this is the best option. I also read that it's 'best practice' to store a custom datetime in a separate property, but I don't think I can do that straight from Stream Analytics (which is writing the blobs). I also considered using an Azure Function to do this (new blob => update property), but then I'm adding complexity and something that might fail for whatever reason.
So I'm looking for the best way to solve this problem. Anyone?
Update: this one probably deserves a bit more explanation. Apart from using the LastModified date to sort on, I also use it to filter blobs. The blobs themselves are CSV files containing ASA output data, i.e. telemetry records. Each record has a timestamp, but that information is inside the file. When retrieving data, I don't want to have to dive into each file to find out the timestamps of the records it contains. So I use a prefilter to select the blobs within a certain timespan, and only then download/open those files to get at the records inside.
This works perfectly as long as you do not touch any of the blobs, but obviously it stops working as soon as any blob gets modified for whatever reason. So I'm now convinced that I need a different/better way to solve this issue; but how?
It seems to me that you have two separate things: the data that you want to store in blob storage, and metadata about the blob, such as the timestamp. I would create a separate (Azure) database for the metadata or, even simpler, just add metadata to the (block) blob:
blockBlob.Metadata.Add("from", dateTime.ToString());
blockBlob.Metadata.Add("to", dateTime.ToString());
blockBlob.Metadata.Add("order", "1");
For sorting I would just add a simple order property.
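If you are working from Python rather than .NET, a rough equivalent with the v12 azure-storage-blob SDK could look like this (the connection string, container, and blob names are placeholders; metadata values must be strings):

from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

# Placeholders: substitute your own connection string, container and blob names.
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="telemetry-test", blob="output.csv")

now = datetime.now(timezone.utc)
blob.set_blob_metadata({
    "from": now.isoformat(),  # start of the record range covered by this blob
    "to": now.isoformat(),    # end of the record range
    "order": "1",             # simple sort key
})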
The comment by @Vignesh deserves the credit here, but in order to get this question marked as answered, I'll provide the answer myself.
With ASA, you can set the output to be structured by date/time. That means that, in this case, data is written to the blob store with a directory structure such as:
2016 / 06 / 27 / 15 / 23 (= 27-06-2016 15:23)
2016 / 06 / 28 / 11 / 02 (= 28-06-2016 11:02)
The ASA output lets you specify how granular you want the structure to be; in my case I chose to store it by day (so not including a time path). The ASA runtime will then ensure that data from a certain point in time is stored within a blob that resides in the correct path.
I then changed my logic to no longer use the datetime stamp of the individual blob files, but to simply read just the files from the folders that fall within the time range I'm interested in. That ensures we only get data that was produced within that time range. And if there's more than one file in a folder, I need to load them all anyway, since they were produced in the same time range. As long as minutes are enough granularity for you, this works very well, even though it might feel a bit strange to use a folder structure for such a thing.
Having a separate 'index' for blobs which tracks their datetime would work too, of course, but it adds complexity which in this case I don't really need.
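To illustrate the prefix-based filtering, here is a minimal Python sketch with the v12 azure-storage-blob SDK; the connection string and container name are placeholders, and the paths assume the day-level structure described above:

from azure.storage.blob import BlobServiceClient

# Placeholders: substitute your own connection string and container name; the
# path layout assumes ASA's date-based output pattern (yyyy/MM/dd).
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("telemetry")

# Enumerate only the blobs under the folders for the days of interest;
# no blob needs to be downloaded just to do the time filtering.
for day_prefix in ("2016/06/27/", "2016/06/28/"):
    for blob in container.list_blobs(name_starts_with=day_prefix):
        print(blob.name)  # download/parse only these CSVs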

Problems with BigQuery and Cloud SQL in same project

So, we have this one project which uses Cloud Storage and BigQuery as services. All has been well.
Then I wanted to add Cloud SQL to this project to try it out. It asked for a unique Project ID, so I gave it one. (The Project ID is different from the Project Number.)
Ever since then, I've been having a difficult time accessing my BigQuery tables. When I go to the BigQuery web interface, the URL contains the Project ID instead of the original Project Number. It shows the list of datasets, but now shows the Project Number before each dataset name, and the datasets are greyed out and inaccessible. If I manually change the URL to contain the Project Number instead of the Project ID, it appears to work, although it shows the list of datasets in the left nav twice: one set greyed out and inaccessible, and the other set seemingly accessible.
At the same time, some code in Apps Script that I've been using successfully to access BigQuery is now regularly failing with a generic "We're sorry, a server error occurred. Please wait a bit and try again." I'm not sure if this is related to the Project ID/Project Number confusion, or if it's just a red herring.
Since we actively use the Cloud Storage service of this project, I am trying to be cautious with further experimentation. I'm not sure whether I should delete the Cloud SQL service from this project to get it back to the way it was, or whether this is a known issue with a back-end fix. Please advise.
After setting the project ID, there can be a delay before BigQuery picks up the change. It should happen within 15 minutes or so, but sometimes it takes longer.
If you send the project ID I can make sure it has been updated.