Date filter on OneDrive Graph API not working

I have the following date filter query on OneDrive, using the greater-than operator:
lastModifiedDateTime gt 2020-08-29T10:48:41.000Z
https://graph.microsoft.com/v1.0/me/drive/root/search(q='test')?filter=lastModifiedDateTime%20gt%202020-08-29T10%3A48%3A41.000Z
But it always returns an empty result. I have the following file details showing on OneDrive:
Type: SRT File
Modified: 8/29/2020 04:40 PM
Added: by me
Date created: 8/29/2020 04:39 PM
Path: My file > stest
Size: 101 KB
Just additional info: I am able to access all files using the following URL.
https://graph.microsoft.com/v1.0/me/drive/root/children
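For reference, a minimal sketch of how this request might be issued from Python with the requests library (ACCESS_TOKEN is a placeholder; note that OData query options in Microsoft Graph are normally spelled with a leading $, i.e. $filter rather than filter):

import requests

# Placeholder: acquire a real Microsoft Graph bearer token via OAuth.
ACCESS_TOKEN = "..."

# OData query options in Microsoft Graph are normally spelled with a
# leading $ (e.g. $filter); requests handles the URL encoding.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/drive/root/search(q='test')",
    params={"$filter": "lastModifiedDateTime gt 2020-08-29T10:48:41.000Z"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json().get("value", []))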

Related

Amazon CloudWatch log agent ignoring first character of log lines

If I add a batch of n test records to my log file, the awslogs agent is erroneously ignoring the first character of the first line of the batch of rows that I add. This is fully reproducible. So if I use vi to append the following test log lines:
2017-06-16 15:20:01,123 this line will not get correctly parsed. first character in date string will be skipped.
2017-06-16 15:21:23,456 this will get parsed. 1
2017-06-16 15:22:23,456 this will get parsed. 2
2017-06-16 15:23:23,456 this will get parsed. 3
2017-06-16 15:24:23,456 this will get parsed. 4
2017-06-16 15:25:23,456 this will get parsed. 5
2017-06-16 15:26:23,456 this will get parsed. 6
2017-06-16 15:27:23,456 this will get parsed. 7
The leading 2 in the first row gets omitted by the log agent. In the CloudWatch Logs web console, the event shows up as 017-06-16 15:20:01,123 this line will..., the datetime string does not get successfully parsed, and the log event must use the timestamp of the previous log.
In the common scenario where I add log events to the file one at a time, the first letter of each line is ignored and the timestamp strings do not get correctly parsed. If I append multiple lines in vi before saving with :w, only the first line experiences this problem and the other lines in the batch get ingested correctly.
I created the log file (as a test) with touch and have only added lines manually with vi so I don't think this is a file encoding problem.
I'm using a mostly standard default configuration.
My CloudWatch Agent Config File:
[general]
state_file = /var/awslogs/state/agent-state
[/var/log/myapp/app.log]
file = /var/log/myapp/app.log
log_group_name = MyAppLogGroup
log_stream_name = MyAppLogStream
datetime_format = %Y-%m-%d %H:%M:%S,%f
Then I download the latest setup script from https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
and run sudo ./awslogs-agent-setup.py -n -r us-west-2 -c cloudwatch_logs.config
Try setting the
initial_position = start_of_file
option explicitly in your config file. Do you get the same behavior?
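For example, the log stanza would then look like this (a sketch of the same config with the option added; initial_position is a documented awslogs agent option):

[/var/log/myapp/app.log]
file = /var/log/myapp/app.log
log_group_name = MyAppLogGroup
log_stream_name = MyAppLogStream
datetime_format = %Y-%m-%d %H:%M:%S,%f
initial_position = start_of_file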

Bigquery create table (native or external) link to Google cloud storage

I have some files uploaded to Google Cloud Storage (CSV and JSON).
I can create BigQuery tables, native or external, linked to these files in Google Cloud Storage.
In the process of creating the tables, I can check "Schema Automatically detect".
"Schema Automatically detect" works well with a newline-delimited JSON file. But with the CSV file, where the first row holds the column names, BigQuery cannot auto-detect the schema: it treats the first line as data, and the schema it creates is then string_field_1, string_field_2, etc.
Is there anything I need to do to my CSV file to make BigQuery's "Schema Automatically detect" work?
The CSV file I have is a "Microsoft Excel Comma Separated Values File".
Update:
If the first column is empty, BigQuery autodetect doesn't detect the headers:
custom id,asset id,related isrc,iswc,title,hfa song code,writers,match policy,publisher name,sync ownership share,sync ownership territory,sync ownership restriction
,A123,,,Medley of very old Viennese songs,,,,,,,
,A234,,,Suite de pièces No. 3 en Ré Mineur HWV 428 - Allemande,,,,,,,
But if the first column is not empty, it is OK:
custom id,asset id,related isrc,iswc,title,hfa song code,writers,match policy,publisher name,sync ownership share,sync ownership territory,sync ownership restriction
1,A123,,,Medley of very old Viennese songs,,,,,,,
2,A234,,,Suite de pièces No. 3 en Ré Mineur HWV 428 - Allemande,,,,,,,
Should it be a feature improvement request for BigQuery?
CSV autodetect does detect the header line in CSV files, so there must be something special about your data. It would help if you could provide a snippet of the real data and the actual commands you used. Here is an example that demonstrates how it works:
~$ cat > /tmp/people.csv
Id,Name,DOB
1,Bill Gates,1955-10-28
2,Larry Page,1973-03-26
3,Mark Zuckerberg,1984-05-14
~$ bq load --source_format=CSV --autodetect dataset.people /tmp/people.csv
Upload complete.
Waiting on bqjob_r33dc9ca5653c4312_0000015af95f6209_1 ... (2s) Current status: DONE
~$ bq show dataset.people
Table project:dataset.people
Last modified Schema Total Rows Total Bytes Expiration Labels
----------------- ----------------- ------------ ------------- ------------ --------
22 Mar 21:14:27 |- Id: integer 3 89
|- Name: string
|- DOB: date
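If autodetect still misreads a particular file, such as the empty-first-column case in the update above, one workaround (a sketch with hypothetical dataset/table names and a simplified three-column file) is to skip the header row and supply the schema explicitly:
~$ bq load --source_format=CSV --skip_leading_rows=1 \
     --schema="custom_id:STRING,asset_id:STRING,title:STRING" \
     dataset.assets /tmp/assets.csv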

unable to load csv file from GCS into bigquery

I am unable to load a 500 MB CSV file from Google Cloud Storage into BigQuery; I got this error:
Errors:
Too many errors encountered. (error code: invalid)
Job ID xxxx-xxxx-xxxx:bquijob_59e9ec3a_155fe16096e
Start Time Jul 18, 2016, 6:28:27 PM
End Time Jul 18, 2016, 6:28:28 PM
Destination Table xxxx-xxxx-xxxx:DEV.VIS24_2014_TO_2017
Write Preference Write if empty
Source Format CSV
Delimiter ,
Skip Leading Rows 1
Source URI gs://xxxx-xxxx-xxxx-dev/VIS24 2014 to 2017.csv.gz
I gzipped the 500 MB CSV file to csv.gz to upload to GCS. Please help me solve this issue.
The internal details for your job show that there was an error reading the row #1 of your CSV file. You'll need to investigate further, but it could be that you have a header row that doesn't conform to the schema of the rest of the file, so we're trying to parse a string in the header as an integer or boolean or something like that. You can set the skipLeadingRows property to skip such a row.
Other than that, I'd check that the first row of your data matches the schema you're attempting to import with.
Also, the error message you received is unfortunately very unhelpful, so I've filed a bug internally to make the error you received in this case more helpful.
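For reference, the equivalent of setting skipLeadingRows from the Python client library would look roughly like this (a sketch; bucket, project, and table names are placeholders):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row so it isn't parsed as data
)

# Placeholder URI and table; a compressed CSV (.csv.gz) loads the same way.
load_job = client.load_table_from_uri(
    "gs://my-bucket/my-file.csv.gz",
    "my-project.DEV.VIS24_2014_TO_2017",
    job_config=job_config,
)
load_job.result()  # wait for completion; inspect load_job.errors on failure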

How to Reset Date/Time M500 Sport DV Camera?

I recently bought an M500 Sport DV cam. I am unable to reset/change the date and time. According to the manual, the cam will create a SportDV.txt file on the SD card, and we can change the date and time from that file.
But my cam is not creating any SportDV.txt file. It only creates two folders: Data (which contains an empty base.dat file) and DCIM (which contains videos and images).
I tried to create the file manually, but it doesn't change the date/time. I also tried different methods, like creating files named times.txt, time.txt, timeset.txt, tag.txt, and settime.txt, but nothing works.
I am unable to change the date and time. It always shows the year 2158 instead of 2015.
Sample Date: 2158/8/14 22:10:22
I tried everything and failed, but then I found the solution.
Open Notepad and copy & paste the following:
SPORTS DV
UPDATE:N
FORMAT
EV:6
CTST:100
SAT:100
AWB:0
SHARPNESS:100
AudioVol:1
QUALITY:0
LIGHTFREQ:0
AE:0
RTCDisplay:1
year:2014
month:7
date:7
hour:16
minute:11
second:0
-------------------------------
Exposure(EV)
0 ~ 12, def:6
Contrast(CTST)
1 ~ 200, def:100
Saturation(SAT)
1 ~ 200, def:100
White Balance(AWB)
0 ~ 3, def:0, 0(auto), 1(Daylight), 2(Cloudy), 3(Fluorescent)
Sharpness
1 ~ 200, def:100
AudioVol
0 ~ 2, def:1, 0:Max 1:Mid 2:Min
QUALITY
0 ~ 2, def:0, 0:High 1:Middle 2:Low
LIGHTFREQ
0 ~ 1, def:0, 0:60Hz 1:50Hz
AUTO EXPOSURE(AE)
0 ~ 2, def:0, 0:Average 1:Center 2:Spot
RTCDisplay
0 ~ 1, def:1, 0:Off 1:On
year
2012 - 2038, def:2013
month
01 - 12, def:1
date
01 - 31, def:1
hour
00 - 23, def:0
minute
01 - 59, def:0
second
01 - 59, def:0
Set UPDATE:N to UPDATE:Y, change year, month, and date,
and save the file with the name SportDV and encoding UTF-8.
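If you prefer to generate the file rather than edit it by hand, here is a rough Python sketch (my own, not from the manual) that writes SportDV.txt with the current date/time and UTF-8 encoding; the E:\ mount point for the SD card is an assumption:

from datetime import datetime

now = datetime.now()

# Same settings block as above, with UPDATE:Y and the current date/time
# filled in; all other values are the defaults from the reference list.
settings = f"""SPORTS DV
UPDATE:Y
FORMAT
EV:6
CTST:100
SAT:100
AWB:0
SHARPNESS:100
AudioVol:1
QUALITY:0
LIGHTFREQ:0
AE:0
RTCDisplay:1
year:{now.year}
month:{now.month}
date:{now.day}
hour:{now.hour}
minute:{now.minute}
second:{now.second}
"""

# Assumes the SD card is mounted at E:\ on Windows.
with open(r"E:\SportDV.txt", "w", encoding="utf-8") as f:
    f.write(settings)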
For versions that have a time.bat file, putting an N at the end of the timestamp in the time.txt file removes the timestamp from the video, i.e. time.txt:
2015.11.13 20:13:31 N
I have the more recent version of the M500 mini camera that doesn't use the SportDV.txt file.
It looks the same physically as the earlier one (same LEDs, same decals), but after being reset it instead has a time.bat file in the root of the card. Executing this on a Windows machine produces a file called time.txt, except the format this batch file writes doesn't work.
I edited the time.txt file and restarted the camera, and it worked after following Andy's format from his posting on the dx.com site:
Choose edit and make sure you replace the (probably nonsense format) contents with 2015.11.13 20:13:31 - in this case that's YYYY.MM.DD HH:MM:SS. Click save, then turn off/eject the camera. Power up, now not connected to the PC, and make a short capture. When you check the content, the date/time will hopefully be right.
AFAIK there is no updated firmware for this version of the camera to change from 3-minute files or hide the time/date text :-(

UniVerse - SQL LIST: View List of All Database Tables

I am trying to obtain a list of all the DB tables, to give me visibility into which tables I may need to JOIN when running SQL scripts.
For example, in TCL when I run "LIST.DICT" it prompts "Name of File:" for input. I then enter "PRODUCT" and it returns a list of all available fields.
However, where can I get a list of all my available tables, i.e. a list of the options I can enter after "Name of File:"?
Here is what I am trying to achieve: I would like to run a SQL script that gives me the latest log file activity (Date - Time - Description), returning, for example, '8/13/14 08:40am BR: 3;BuyPkg'.
Thank you in advance for your help.
From TCL within the database account containing your database files, type: LISTF
Sample output:
FILES in your vocabulary 03:21:38pm 29 Jun 2015 Page 1
Filename........................... Pathname...................... Type Modulo
File - Contains all logical device names
DICT &DEVICE& /u1/uv/D_&DEVICE& 2 1
DATA &DEVICE& /u1/uv/&DEVICE& 2 3
File - Used by MAKE.MAP.FILE
DICT &MAP& /u1/uv/D_&MAP& 2 1
DATA &MAP& /u1/uv/&MAP& 6 7
File - Contains all parts of Distributed Files
DICT &PARTFILES& /u1/uv/D_&PARTFILES& 2 1
DATA &PARTFILES& /u1/uv/&PARTFILES& 18 7
DICT &PH& D_&PH& 3 1
DATA &PH& &PH& 1
DICT &SAVEDLISTS& D_&SAVEDLISTS& 3 1
DATA &SAVEDLISTS& &SAVEDLISTS& 1
File - Used by uniVerse to access the current directory.
DICT &UFD& /u1/uv/D_UFD 2 1
DATA &UFD& . 19 1
DICT &XML& D_&XML& 18 3
DATA &XML& &XML& 19 1
Firstly, UniVerse has no log file activity date and time.
However, you can still obtain a table's modified/accessed date from the file system.
To do this:
1. Create a subroutine that accepts the path of a table and returns a date or a time, e.g. SUBROUTINE GET.FILE.MOD.DATE(DAT.MOD, S.FILE.PATH). Inside the subroutine, you can use EXECUTE to run a shell command like istat to get this information on Unix (a rough sketch appears at the end of this answer). Beware that for a dynamic file there are Data and Overflow parts under a directory; you should compare the dates obtained and return only the latest one.
2. Globally catalog the subroutine.
3. Create an I-Desc in VOC, e.g. I.FILE.MOD.DATE, with the field definition SUBR("*GET.FILE.MOD.DATE", F2) and conversion code "D/MDY2".
4. Create another I-Desc, e.g. I.FILE.MOD.TIME.
Finally, you can
LIST VOC I.FILE.MOD.DATE I.FILE.MOD.TIME DESC WITH TYPE LIKE "F..."
or alternatively, in SQL:
SELECT I.FILE.MOD.DATE, I.FILE.MOD.TIME, VOC.DESC FROM VOC WHERE TYPE LIKE "F%";
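A rough sketch of the subroutine from step 1 (an assumption-laden sketch, not tested: it uses stat rather than istat for brevity, assumes a Unix host, and relies on Pick internal date 0 being 31 Dec 1967, so 1 Jan 1970 is day 732):

SUBROUTINE GET.FILE.MOD.DATE(DAT.MOD, S.FILE.PATH)
* Rough sketch only: return the file's modification date in internal format.
* Assumes a Unix host where 'stat -c %Y' prints seconds since the Unix epoch.
   DAT.MOD = ''
   EXECUTE 'SH -c "stat -c %Y ':S.FILE.PATH:'"' CAPTURING RAW.OUT
   EPOCH.SECS = TRIM(RAW.OUT<1>)
   IF NUM(EPOCH.SECS) THEN
* Pick internal date 0 = 31 DEC 1967, so 1 JAN 1970 = day 732.
      DAT.MOD = INT(EPOCH.SECS / 86400) + 732
   END
   RETURN
END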