Use case:
A/B symmetrical partitions
flashing a new image was successful
system rebooted to the newly flashed partition
almost everything is running fine except a few components -> not a successful update; it should boot the old partition on the next reboot
RAUC has a mechanism for explicitly marking success with rauc status mark-good.
I can't find anything similar in the SWUpdate documentation. Is it there, or does SWUpdate consider a successful flash to be a fully successful update?
See the progress interface in the documentation to forward the result of the update. Switching A/B via the bootloader is set as the last step in SWUpdate; it is never done if an update is unsuccessful.
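If you need RAUC-style "mark it good only once the system proves healthy" behaviour after the reboot, the usual pattern is to combine SWUpdate with a bootloader boot counter: the bootloader falls back to the old slot once the counter exceeds a limit, and a service on the freshly booted rootfs resets the counter only after checking the components you care about. Below is a minimal sketch of such a post-boot check, assuming a U-Boot setup with bootcount/bootlimit/altbootcmd configured and the fw_setenv tool available; the service names and environment variable names are illustrative assumptions, not SWUpdate API.

    #!/usr/bin/env python3
    # Post-boot health check: a rough equivalent of "rauc status mark-good".
    # Assumes U-Boot falls back to the old slot via bootcount/bootlimit/altbootcmd.
    import subprocess
    import sys

    # Hypothetical components that must be up for the update to count as good.
    SERVICES = ["my-app.service", "my-daemon.service"]

    def service_ok(name):
        # "systemctl is-active --quiet" exits 0 only when the unit is active.
        return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

    def main():
        if all(service_ok(s) for s in SERVICES):
            # Everything we care about is healthy: clear the boot counter so
            # the bootloader keeps booting the newly flashed slot.
            subprocess.run(["fw_setenv", "bootcount", "0"], check=True)
            return 0
        # Leave the counter alone; once it exceeds bootlimit, U-Boot's
        # altbootcmd boots the previous slot on the next reboot.
        print("health check failed; not marking the update as good", file=sys.stderr)
        return 1

    if __name__ == "__main__":
        sys.exit(main())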
I have a working implementation of iCloud. Now I want to improve conflict handling by adding some merge functionality. I've been trying to come up with a consistent way of forcing a conflict for testing purposes but I haven't had luck so far, conflicts don't occur consistently when I expect them to occur. This might indicate that I'm doing something wrong, or maybe that I just misunderstood something about how iCloud works (yet another thing, I mean).
I'm using UIDocument and yes, I'm listening to the UIDocumentStateChangedNotification. In fact, I do get some occasional conflict notifications. Also, I only have one file in iCloud.
Having two devices using the same iCloud account, here is the flow of events I was expecting to always cause a conflict:
Open the file on both devices (both devices are now correctly seeing the same content). Note: this is the only time openWithCompletionHandler is called; after this it's never called again.
Make some change on device A and call saveToURL.
Wait some time to allow the change to propagate.
Make some other change on device B and call saveToURL.
Wait some time to allow the change to propagate.
EXPECTED: The app should be getting a conflict notification from iCloud. OBSERVED: A conflict does occur very occasionally, but most of the time what happens is simply that the UIDocument gets its UIDocumentStateEditingDisabled flag set and then cleared back after half a second or so (I'd guess editing is being disabled while the iCloud daemon is pulling the version from the other device and saving it in the local ubiquitous directory).
Much as in a version control system such as SVN, I was expecting the version from device B to cause a conflict, because an "update" was required in order to get the version uploaded by device A.
Am I wrong expecting a conflict in the scenario I just described? Why? Is there any other way to consistently force a conflict?
Thanks!
I would have thought a better way to cause a conflict would be to:
Make sure both devices have an up-to-date copy of the data
Put both devices into Airplane Mode to prevent any iCloud updates
Change the data in the same place on both devices, each with different new data
Turn the network back on
Wait for the changes to propagate
From the docs:
Conflicts occur when two instances of an app change a file locally and both changes are then transferred to iCloud. For example, this can happen when the changes are made while the device is in Airplane mode and cannot transmit changes to iCloud right away. When it does happen, iCloud stores both versions of the file and notifies the apps’ file presenters that a conflict has occurred and needs to be resolved.
The way you are doing it (allowing time for sync, altering the documents differently) seems like it shouldn't cause a conflict.
iCloud works basically the same way as a version control system, except that you can only access the conflicting versions (when a conflict happens).
When a device has pulled ver_1 from iCloud, edits it, saves it, and then finds that the server has a different version (ver_2 or newer) than it expected, a conflicted version will be created.
After the initial sync, you can:
turn off wifi on device B, edit & save.
edit on device A, save.
turn on wifi on device B.
A conflict will come soon.
Why isn't it standard behavior for AccuRev to automatically run an "Update" upon opening the program? "Update" updates a user's local sandbox with the latest files from the building/promoted area.
It seems like expected functionality that the most recent files should be synchronized first.
I'm not claiming that it should always update, but curious as to why an auto-Update wouldn't be correct.
Auto-updating could produce some very unwanted results.
Take this scenario: you're in the middle of a development task, but you've made a mistake and need to revert a file that you just modified. So you open AccuRev, but before you have a chance to "revert to most recent version", you are bombarded with 100 files that have been changed upstream including the one you want to revert. You are now forced into the position of resolving all the merge conflicts before your solution will build, including the merge of your (possibly unstable) code in progress.
Requiring the user to manually update keeps a protective 'bubble' around the developer, allowing them to commit (keep) changes within their own workspace without bringing down code changes that could destabilise the work in their sandbox. When the developer gets to a point where his code is ready to share with others, that is the appropriate time to do an update and subsequently build/retest the merged codebase before promoting.
However there is one scenario that I do believe auto-updating could be useful: after a workspace is reparented. i.e. when a developer's workspace is moved from one part of the stream hierarchy to another. Every time we reparent we have to do a little dance:
Accept the confirmation dialog that reminds us (rather verbosely) that we need to update our workspace before we can promote any changes.
Double-click the workspace to view its files.
Wait for AccuRev to do a "Pending" search, to determine whether any file changes are waiting to be committed.
And finally, perform the Update.
Instead of just giving us a confirmation dialog, it would be nice if AccuRev could just ask us if we want to Update immediately.
I guess it depends on preference. I for one wouldn't like the auto-update feature.
Imagine you have a huge project and you don't want to rebuild it every time you start AccuRev. But after an auto-update you also can't debug, because the source files and the debugging info no longer correspond.
I'm writing a WLST script to deploy some WAR's and an EAR. However, intermittently, the script will time out because it can't seem to get an edit lock (this script is part of a chain of many other scripts). I was wondering, is there a way to override or stop any current locks on the server? This is only a temporary solution, but in the interest of time, it will do for now.
Thanks.
You could try setting a wait period and timeout:
startEdit([waitTimeInMillis], [timeoutInMillis], [exclusive]).
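For example, a rough WLST (Jython) sketch that waits up to two minutes for the lock and times the edit session out after ten; the URL, credentials and timings are placeholders:

    # WLST sketch: connection details and timings are placeholders.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    # Wait up to 120s to acquire the configuration lock; session times out after 600s.
    startEdit(120000, 600000)
    try:
        # ... deploy the WARs / EAR here ...
        save()
        activate(block='true')
    except Exception, e:
        print 'Edit failed, releasing the lock:', e
        stopEdit('y')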
Are other scripts erroring out, leaving the session locked? You could try adding exception handling around those. Also, if you have 'Automatically acquire lock' enabled in the Admin Console and you use the console while scripts are running, it can sometimes cause problems, even though you are not making "lock-requiring" changes.
Also, are you using the same user for the chained scripts?
Within WLST, you can pass the exclusive parameter to startEdit to gain an exclusive lock. This allows the script to grab a different lock than the regular one that's used whenever an administrator locks from the console. It also prevents two instances of the same script from stepping on each other.
However, this creates complex change merge scenarios that are best avoided (by processes).
Oracle's documentation on configuration locks can be found here.
Alternatively, if you want the script to temporarily bypass any existing locks regardless of the pending changes, you could disable change management from the console, which minimizes the inconvenience.
WLST also contains the cancelEdit command that you could run before you startEdit. Hope one of these options pans out!
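A combined sketch of those two ideas (force-cancel any leftover session, then take an exclusive lock); the connection details and timings are placeholders, and cancelEdit needs an administrative user:

    # WLST sketch: placeholders throughout.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    try:
        # Discard a half-finished edit session left behind by an earlier script.
        cancelEdit('y')
    except Exception, e:
        print 'Nothing to cancel:', e
    # Exclusive lock: separate from the non-exclusive lock the Admin Console
    # normally takes, and it stops two copies of this script colliding.
    startEdit(60000, 300000, 'true')
    # ... make configuration changes ...
    save()
    activate(block='true')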
To take the configuration change lock from another administrator:
If another administrator already has the configuration lock, the following message appears: Another user already owns the lock. You will need to either wait for the lock to be released, or take the lock.
Locate the Change Center in the upper left corner of the Administration Console.
Click Take Lock & Edit.
Make your configuration changes.
In the Change Center, click Activate Changes. Not all changes take effect immediately; some require a restart (see Use the Change Center).
As long as you're running WLST as an administrative user, you should be able to jump into an existing edit session with the edit() command - I've done a quick test with two admin users, one in the Admin Console, and one using WLST, and it appears to work fine - I can see the changes in the Admin Console session inside the WLST interpreter.
You could put a very simple exception handler around your calls to startEdit that will log the exception's stack trace, but do nothing else. And then rely on the edit call to pop you into the change session.
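A minimal sketch of that approach (connection details are placeholders; dumpStack() is the WLST command that prints the last stack trace):

    # WLST sketch: join an existing edit session instead of failing outright.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    try:
        startEdit(60000, 300000)
    except Exception, e:
        # Someone else already holds the lock; log it and carry on.
        # edit() above has already placed us in the shared edit tree.
        dumpStack()
    # ... make changes ...
    save()
    # Whether to activate() here depends on who owns the session (see the caveat below).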
Relying on that is going to be tricky though if another script has started an edit session and is expecting to be able to commit that change session itself - you'll be getting exceptions and unreliable behaviour across multiple invocations.
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible manner enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual Acceptance test configs via Snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC; the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can say "Limit the number of simultaneously running builds" to force only one to happen at a time, I need them to run one at a time and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right; I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of 'stuff a flag over here and have this bit check it' approach is the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: if I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false and then set the flag when a deploy starts and clear it after the tests have run
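For what it's worth, here is a rough sketch of that idea, under the assumption (worth verifying) that agent configuration parameters come from conf/buildAgent.properties on the agent and that the agent picks the file up again when it changes; the InUse property name and the file path are purely illustrative, not TeamCity API:

    # Sketch: toggle a custom agent property that an Agent Requirement can match on.
    # Path and property name are assumptions.
    import re

    PROPS = "/opt/teamcity-agent/conf/buildAgent.properties"

    def set_in_use(value):
        with open(PROPS) as f:
            text = f.read()
        if re.search(r"^system\.InUse=", text, flags=re.M):
            text = re.sub(r"^system\.InUse=.*$", "system.InUse=" + value, text, flags=re.M)
        else:
            text += "\nsystem.InUse=" + value + "\n"
        with open(PROPS, "w") as f:
            f.write(text)

    # DeploySite would call set_in_use("true") at its start, and the aggregating
    # AcceptanceTests config would call set_in_use("false") once the suites finish.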
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word 'clobber' in your search.
Then you install groovy-plug and use its StartBuildPrecondition facility to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item. From the plugin's description:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, and what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give read access to builds triggered by the completion of a writeLock build? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependencies at the same time?
Imagine I have an AIR application to update: the preceding version number is 0.0.1, the current one is 0.0.2. Now, the preceding app is installed on many different PCs. I want to update ONLY some clients, based on a particular ID. Is it possible to skip the update process for some clients?
The short answer is yes, it is possible. You can do just about whatever you need if you write your own code to check for updates, download updates to a local file, and then call System:Updater.update() or Air.update:ApplicationUpdater.installFromAIRFile(). The old AIRUpdater.js example from AIR 1.0 can get you started.
The problem is you'll have to update all of your 0.0.1 clients with new code before they can make a decision on 0.0.2. And if you don't have a good way to ensure they've all been updated before deploying your next version, you'll probably want to change the location of the update descriptor file in your intermediate version. Otherwise you may end up with straggler 0.0.1's that skip the intermediate version and update to 0.0.2 without your ID checking.
And I haven't tried this yet, but it might be even easier to use the newer Air.update:ApplicationUpdater class from AIR 1.5 and put your ID checking in the updateStatus event.