Consistency of snapshot code in RTC?

A snapshot named "snapshot 1" is created for a stream on Jan 1, 2017 (for example).
Later I create a stream from snapshot 1 and deliver some code (in Feb 2017).
After a month, I create another stream from snapshot 1 (March 2017).
Will snapshot 1 still have the same code as when it was created on Jan 1, 2017?

Will snapshot 1 still have the same code as when it was created on Jan 1, 2017?
Yes, that is the idea behind a snapshot: it is immutable... unless you delete it (which is possible since RTC 4.0) and recreate one with the same name.
But as long as you have not deleted it, a snapshot's content never changes.

Related

Redis RDB save occurs even with AOF and snapshotting turned off (and possibly leads to complete key loss)

We're running Redis 5.0.3 on Docker, with both RDB saving and AOF turned off:
127.0.0.1:6379> config get save
1) "save"
2) ""
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "no"
Everything runs fine (no backups in the logs), until this morning when we got several DB backup logs in quick succession:
21 Mar 2019 04:12:58.453 * DB saved on disk
21 Mar 2019 04:12:58.454 * DB saved on disk
21 Mar 2019 04:12:58.456 * DB saved on disk
21 Mar 2019 04:13:50.153 * DB saved on disk
21 Mar 2019 04:13:51.573 * DB saved on disk
21 Mar 2019 04:13:52.282 * DB saved on disk
21 Mar 2019 04:21:18.539 * DB saved on disk
21 Mar 2019 04:21:18.540 * DB saved on disk
21 Mar 2019 04:21:18.541 * DB saved on disk
During this time period, Redis drops all of our keys - twice!
Any ideas why this is happening? The system is not under memory or CPU pressure, all graphs look normal.
Other useful things:
Memory usage of Redis is increasing but is still well within the bounds of the box (as expected, since we're storing streams of data)
Number of keys is flat during this time period, until they all get dropped
Latency is flat the whole time
Redis reports no expired or evicted keys
The slow log bumps up during that time period and then is flat again immediately after.
EDIT
On further debugging using INFO commandstats, it seems that several FLUSHALL commands were issued during this time period, which (looking at the Redis source) would explain the DB saves.
I have no idea why these flushes are occurring - we do not have any flush commands in our applications. Debugging continues.
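For anyone debugging something similar, a minimal sketch of how to confirm unexpected flushes from the command counters and watch for them live (plain redis-cli, nothing project-specific assumed):
# Count how many times flush commands have been issued since the server started
redis-cli info commandstats | grep -i flush
# Watch incoming commands in real time (only on low-traffic instances; MONITOR is expensive)
redis-cli monitor | grep -i flush
If the instance is reachable from untrusted networks, renaming or disabling the command in redis.conf (rename-command FLUSHALL "") is a common mitigation.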

Cron job to run a script daily at 6 PM

So I set up my PHP page and the cron job in order to create an Excel file with data in it every day at 6 PM, but it is not working as intended.
It executes the script every 2 days, not daily.
0 11 * * * wget -q --spider http://example.com/UserReport_Export.php
It actually created the file on 1 June and 3 June, while 2 June was not there. Any idea?
I also put the hour 11 in the crontab because of the server time, in order to match 6 PM in my local time; maybe that affects something? Thank you.
You can set the cron like this:
0 18 * * * wget -q --spider http://example.com/UserReport_Export.php
There are also online tools for building cron expressions like this, which can help out. The cron schedule you have set seems to be correct, if 11 on the server matches 6 PM in your local time.
It actually created the file on 1 June and 3 June, while 2 June was not there. Any idea?
Is it possible for the script, or anything else, to interfere with your Excel file?
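To rule out the timing theory, a minimal sketch of checks to run on the server (the log path assumes a Debian/Ubuntu-style syslog):
# What time and timezone does the server think it is?
date
timedatectl   # systemd systems only
# Did cron actually fire the job each day? (log location varies by distro)
grep UserReport_Export /var/log/syslog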

ForEach Loop Container does not pass variables to DataFlowTask

I have a ForEach Loop Container (FELC) with a Data Flow Task (DFT) in my SSIS package. In the DFT there is an OLE DB Source whose SQL command comes from the variable "DateToSelect". This variable has two parameters, StartDate and EndDate, which I pass in from my FELC. The goal of the package is to load a table on SQL Server for certain periods.
The problem is this: SSIS runs without errors, but the data does not load into the table. I have put breakpoints and watches in place to see whether my variables are changing, and they really do, but still nothing is loading.
Does anybody have an idea what is wrong?
I have noticed one more detail: the package loads data only for April, while the period is from January to April (or from February to April), and if I change the period to January through March, again no data loads.
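One way to verify that the values actually reach the package at run time is to execute it from the command line with explicit values and verbose reporting. This is only a sketch: the package file name and dates are placeholders, and the variable names are taken from the question.
dtexec /F "LoadByPeriod.dtsx" ^
  /SET "\Package.Variables[User::StartDate].Value;2019-01-01" ^
  /SET "\Package.Variables[User::EndDate].Value;2019-04-30" ^
  /REPORTING V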

Google BigQuery Create/append to table from Avro internalError

I am fairly new to BigQuery; however, I had been able to create and append to existing BigQuery tables from Avro files (both in the EU region) until 1-2 days ago. I am only using the web UI so far.
I just attempted to create a new table from a newly generated Avro file and got the same error, details below:
Job ID bquijob_670fd977_15655fb3da1
Start Time Aug 4, 2016, 3:35:45 PM
End Time Aug 4, 2016, 3:35:53 PM
Write Preference Write if empty
Errors:
An internal error occurred and the request could not be completed.
(error code: internalError)
I am unable to debug because there is not really anything to go by.
We've just released a new feature to not create the root field: https://cloud.google.com/bigquery/release-notes.
Since you have imported Avro before, we excluded your project from this new feature. But unfortunately we had a bug with the exclusion, which causes reading Avro to fail. I think you most likely ran into this problem.
The fix will be released next week. If you don't need the root field and want to enable your project for the new feature, please send the project id to me, huazhang at google.com. Sorry for the trouble this has caused.

Unexpected error while copying query results to a table using the Google Java BigQuery API for GAE

I'm trying to copy a query result to a new table, but I'm getting an error:
Copy query results to 49077933619:TelcoNG.table (11:13am)
Errors:
Unexpected. Please try again.
Job ID: job_090d08f69c8e4199afeca131b5279393
Start Time: 11:13am, 12 Aug 2013
End Time: 11:13am, 12 Aug 2013
Copy Source: 49077933619:_8dc46c0daeb9142a91aa374aa59d615c3703e024.anon17d88e0e_0960_4510_9740_b753109050f4
Destination Table: 49077933619:TelcoNG.table
I have been getting this error since last Thursday (8 Aug 2013).
This functionality had worked perfectly for over a year.
Are there any changes in the API?
It looks like there is a bug in detecting which datacenters a table created as the result of a query has been replicated to. You're doing the copy operation very soon after the query finished, and before the results have finished replicating. As I mentioned, this is a bug and we should fix it very soon.
As a short-term workaround, you can either wait a few minutes between the query and the copy operation, or you can set a destination table on your query, so you don't need to do the copy operation at all.
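A rough sketch of the second workaround with the bq command-line tool (the table names and query are placeholders; the equivalent destination-table setting also exists in the API's query job configuration):
# Write the query result straight into a named table instead of copying it afterwards
bq query --destination_table=TelcoNG.results_table --allow_large_results 'SELECT field1, field2 FROM TelcoNG.source_table'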