Why do we use snapshots in RTC?

I've been reading about snapshots and trying them out in practice.
I understand the idea of a snapshot, but I could not figure out why we use them.
Could anyone please tell me what a snapshot is used for in RTC, or describe a scenario where a snapshot is used?

A snapshot is used to keep a coherent state of a stream (multiple components and their current baselines/delivered change sets) or of a repo workspace.
For instance, when an RTC build is requested, RTC starts by making a snapshot of the build-dedicated repo workspace specified by the build definition, so that the user can easily create a stream based on that snapshot later on if the build is problematic and warrants its own debug environment.
A snapshot is very useful to initialize a stream in one operation: select a snapshot and all the component baselines included in that snapshot appear in the stream.
(Note that there is also a snapshot in planning: here I was referring only to a source control snapshot)

Related

How to break down terraform state file

I am looking for guidance/advice on how to best break down a terraform state file into smaller state files.
We currently have one state file for each environment, and it has become unmanageable, so we are now looking to have a state file per Terraform module, which means we need to split out the current state file.
Would it be best to point to a new S3 bucket, then run a plan and apply for the broken-down modules to generate a fresh state file for each module, or is there an easier or better way to achieve this?
This all depends on how your environment has been provisioned and how critical downtime is.
Below are the two general scenarios I can think of from your question.
First scenario (if you can take downtime)
Destroy everything you have and start from scratch, defining a separate backend for each module and provisioning the infrastructure from that point on. You then get backend segregation, and infrastructure management becomes easier.
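For illustration only (the bucket and key names below are placeholders, not taken from your setup), each module directory would then carry its own backend definition along these lines:
terraform {
  backend "s3" {
    bucket = "my-terraform-states"                # placeholder bucket name
    key    = "dev/networking/terraform.tfstate"   # unique key per module and environment
    region = "us-east-1"                          # placeholder region
  }
}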
Second scenario (if you can't take downtime)
Let's say you are running mission-critical workloads that absolutely can't take any downtime.
In this case, you will have to come up with a proper plan for migrating the huge monolithic backend to smaller backends.
Terraform has a command called terraform state mv which can help you migrate resources from one Terraform state to another.
When you work through this scenario, start with the lower-level environments and work up from there.
Note down any caveats you encounter during these migrations in the lower-level environments; the same caveats will apply in the higher-level environments as well.
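As a rough sketch of that migration step (resource addresses and paths are placeholders, and it assumes you are working on local copies of the state files), moving a single resource out of the monolithic state into a module's own state looks like this:
# run inside the new module directory, after terraform init
terraform state mv \
  -state=../monolith/terraform.tfstate \
  -state-out=terraform.tfstate \
  aws_instance.app_server aws_instance.app_server
# afterwards, terraform plan in both directories should propose no changes
Once the plans are clean, push the resulting state files to their respective backends before moving on to the next environment.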
Some useful links
https://www.terraform.io/docs/cli/commands/state/mv.html
https://www.terraform.io/docs/cli/commands/init.html#backend-initialization
Although the only other answer (as of now) lists just two options, there is a third: simply create new Terraform repos (or folders, however you are handling your infrastructure) and then run terraform import to bring the existing infrastructure into those repos.
Once all of the imports have proven to be successful, you can remove the original repo/source/etc. of the monolithic terraform state.
The caveat is that the code for each of the new state sources must match the existing code and state, otherwise this will fail.
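A hedged sketch of that import flow (the resource type, name, and bucket ID below are placeholders):
# inside the new, smaller repo/folder, with matching resource code already written
terraform init
terraform import aws_s3_bucket.logs my-existing-logs-bucket
terraform plan   # should report no changes if the code matches the real resource
Once the plan is clean, remove the corresponding resource from the monolithic state (for example with terraform state rm) so it is not managed from two places.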

Clean up and prevent excessive data accumulation in a MobileFirst Analytics 8.0 environment

Our analytics data is taking up almost 100% of the disk space on the file system. How do we remove the older data, and prevent such a situation from occurring again?
You can follow https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/installation-configuration/production/server-configuration/#setting-up-jndi-properties-for-mobilefirst-server-web-applications to set up JNDI properties in MobileFirst. You need to set the TTL values based on your business requirements, and keep the values as short as possible, so that huge data accumulation does not occur again. To clean up the existing data, you can perform the following:
Setup the Analytics server with JNDI properties set for TTL and other configuration
Stop the Analytics Server
Delete the contents of the /analyticsData directory to discard any initial data (this has no impact, since no data has accumulated yet; afterwards there should be no directories within the analyticsData directory). Note:
/analyticsData is the default location, please refer
http://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/installation-configuration/production/analytics/configuration/ to verify the actual value in your environment.
Restart the Analytics server. (The index will now be created from scratch with the TTL in effect, so data is purged properly.)
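For illustration only, assuming the Analytics server runs on WebSphere Liberty and that the property names below match your version (verify them against the configuration documentation linked above), the TTL settings are plain JNDI entries in server.xml:
<!-- hypothetical property names and values; check the MobileFirst Analytics docs for your release -->
<jndiEntry jndiName="analytics/TTL_AppSession" value="30d"/>
<jndiEntry jndiName="analytics/TTL_NetworkTransactions" value="30d"/>
The shorter the TTL values, the sooner old documents are purged and the less disk space the analytics data can accumulate.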

Updating OpenFlow group table bucket list in OpenDaylight

I have a mininet (v2.2.2) network with openvswitch (v2.5.2), controlled by OpenDaylight Carbon. My application is an OpenDaylight karaf feature.
The application creates a flow (for multicasts) to a group table (type=all) and adds/removes buckets as needed.
To add/remove buckets, I first check if there is an existing group table:
InstanceIdentifier<Group> groupIid = InstanceIdentifier.builder(Nodes.class)
.child(Node.class, new NodeKey(NodId))
.augmentation(FlowCapableNode.class)
.child(Group.class, grpKey)
.build();
ReadOnlyTransaction roTx = dataBroker.newReadOnlyTransaction();
// read the current group (if any) from the operational datastore
Future<Optional<Group>> futOptGrp = roTx.read(LogicalDatastoreType.OPERATIONAL, groupIid);
If it doesn't find the group table, it is created (SalGroupService.addGroup()). If it does find the group table, it is updated (SalGroupService.updateGroup()).
The problem is that it takes some time after the RPC call add/updateGroup() to see the changes in the data model. Waiting for the Future<RPCResult<?>> doesn't guarantee that the data model has the same state as the device.
So, how do I read the group table and bucket list from the data model and make sure that I am indeed reading the same state as the current state of the device?
I know that
Add/UpdateGroupInputBuilder has setTransactionUri()
DataBroker provides transactions for reading/writing
you should use transaction chaining
But I cannot figure out how to combine these.
Thank you
EDIT: Or do I have to use write transactions instead of RPC calls?
I dropped using RPC calls for writing flows and switched to using writes to the config datastore. It will still take some time to see the changes appear in the actual device and in the operational datastore but that is ok as long as I use the config datastore for both reads and writes.
However, I have to keep in mind that it is not guaranteed that changes to the config datastore will always make it to the actual device. My flows are not that complicated in the sense that conflicts are unlikely to happen. Still, I will probably check consistency between operational and configuration datastore.
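For reference, a minimal sketch of that config-datastore approach (reusing the groupIid from the snippet above and assuming a group object built the same way as for the RPC call; Carbon-era MD-SAL binding API):
// write the desired group (including its bucket list) to the CONFIGURATION datastore;
// the OpenFlow plugin then pushes the change to the switch asynchronously
WriteTransaction wTx = dataBroker.newWriteOnlyTransaction();
wTx.put(LogicalDatastoreType.CONFIGURATION, groupIid, group, true);
wTx.submit();

// read back from the same CONFIGURATION datastore, so reads and writes agree on the
// intended state regardless of how far the device has caught up
ReadOnlyTransaction roTx = dataBroker.newReadOnlyTransaction();
Future<Optional<Group>> futOptGrp = roTx.read(LogicalDatastoreType.CONFIGURATION, groupIid);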

Possibility of restoring a deleted stream?

A new stream called stream 1 is created.
I deliver some changes to stream 1.
Later, I delete stream 1.
So:
Is there a possibility to restore a deleted stream?
If I am not able to restore the stream, will I lose the changes delivered to it?
Is there a possibility to restore a deleted stream?
Not easily, unless you had created snapshots (we covered snapshots in your previous question "Consistency of snapshot code in rtc?"): in that case, when you delete a stream, RTC asks you to select another existing stream to take over ownership of those snapshots.
If you do, then it is trivial to re-create a new stream from a snapshot, ensuring that you recover all components in the exact state recorded by the snapshot.
But if you didn't create any snapshot, then you have to manually re-add all the components and set them to (for instance) their most recent baselines.
If I am not able to restore the stream, will I lose the changes delivered to it?
In any case, as mentioned in the thread "Delete a Stream - any side-effects?"
Change-sets exist independently of any stream, so deleting a
stream does not delete any change-sets.
It will just be harder to get the exact list of change sets back into a new stream if they were only delivered to stream 1 (which you deleted), especially if those change sets were never grouped in a baseline (for a given component) or, as explained above, in a snapshot.
But those change sets are not gone.

docker commit running container

When committing a running container with docker commit, is this creating a consistent snapshot of the filesystem?
I'm considering this approach for backing up containers. You would just have to docker commit <container> <container>:<date> and push it to a local registry.
The backup would be incremental, as the commit would just create a new layer.
Also, would a large number of layers hurt the IO performance of the container drastically? Is there a way to remove intermediate layers at a later point in time?
Edit
By consistent I mean that every application that is designed to survive a power loss should be able to recover from these snapshots. Basically this means that no file must change after the snapshot is started.
Meanwhile I found out that Docker now supports multiple storage drivers (aufs, devicemapper, btrfs). Unfortunately there is hardly any documentation about the differences between them and the options they support.
I guess consistency is what you define it to be.
In terms of flattening and the downsides of stacking too many AUFS layers, see:
https://github.com/dotcloud/docker/issues/332
docker flatten is linked there.
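If you do need to flatten at some point, one common workaround (container and image names are placeholders) is to export the container's filesystem and re-import it as a single-layer image; note that this drops image metadata such as ENV, CMD and EXPOSE:
docker export my_container | docker import - my_image:flat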
I am in a similar situation. I am thinking about not using a dedicated data volume container and instead committing regularly to have some kind of incremental backup. Besides the incremental backup, the big benefit is for a team development approach. As a newcomer you can simply docker pull a database image that already contains all the data you need to run, debug and develop.
So what I do right now is to pause before commit:
docker pause happy_feynman; docker commit happy_feynman odev:`date +%s`
As far as I can tell I have no problems right now. But this is a development machine, so I have no experience with heavy-load servers.
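For completeness, a hedged sketch of the full backup round trip described above (the registry address and names are placeholders):
# pause for a consistent filesystem, commit, push to a local registry, then resume
TAG=`date +%s`
docker pause happy_feynman
docker commit happy_feynman localhost:5000/odev:$TAG
docker unpause happy_feynman
docker push localhost:5000/odev:$TAG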