Locks don't get released - locking

I have a Notes DB where documents might be edited by different users, so I enabled locking to avoid conflicts. After working for some weeks without problems, it now seems that locks don't get released after the document is closed. When I disable locking, the users can edit again.
Is there any action (compact, copy) I can take to make this stable again?
This is a single DB (others are working fine) on Domino 8.

I will assume that you are saying that this behavior (locked documents not unlocking when users close the document) is persistent (whenever document locking is enabled) and consistent among all users.
You can try a fixup on the database; the server console command would be:
Load fixup yourdb.nsf
You could also create a "Release All Locks" agent (must be run under an ID with Manager access) with target "All Documents in Database":
SELECT @IsAvailable($Writers) | @IsAvailable($PWriters) | @IsAvailable($PTWriters);
@DocLock([UNLOCK])
An education-based strategy might be to retrain users to use Actions - Lock Document and Actions - Unlock Document rather than relying on automatic locks from editing or closing a document.
A good source for deeper understanding of document locking is this eView article.

These symptoms sound similar to this issue reported on the IBM support site. Details on the intricacies of document locking are here; suffice it to say it's not bulletproof, see here. If this is the same issue, it was resolved in releases 7.0.2 and 6.5.5 FP2. Given that you're using version 8, it should remain fixed, unless it has been reintroduced as a regression.

Related

ArcGIS file geodatabase lock issues

Currently, our workforce has about 15 users who all view the same file geodatabases. Typically, when viewing the data only, a schema lock (sr.lock) is created for whichever feature class is being viewed inside the geodatabase. Lately, one of the users has been creating both a schema lock and a read lock (rd.lock) on the same feature classes, though not for all feature classes being viewed. I can start an editing session on the geodatabase in which the read lock is created, and it will create my editing locks (ed.lock) and let me edit just fine as normal. It's only when I go to save my work that "a lock cannot be acquired". At that point it will not let me save my edits as normal. What would be causing just a few feature layers to create these random read locks on top of the schema locks?
I hope you have found your answer. In case you have not: we encountered the same problem. It turns out that if any one of the users of a mutually used feature class OPENS the TABLE, that creates the rd.lock file, and this is what prevents others from SAVING their edits. Our solution has been to ask neighbors if they have that particular table open; if so, closing JUST the table lets the editor save his/her edits. I hope this helps; I know it is a massive headache, but this solution has worked for us thus far.
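In case it helps anyone hitting this later: a file geodatabase represents its locks as small files inside the .gdb folder, with the locking machine's name embedded in the file name, so a few lines of script can tell you who to ask. A minimal Python sketch (the path is a made-up placeholder, and exact lock-file naming varies by ArcGIS version):

import glob
import os

# Path to the shared file geodatabase (hypothetical location).
GDB_PATH = r"\\fileserver\gisdata\parcels.gdb"

# Lock files live inside the .gdb folder; the machine name (and often a
# process id) is embedded in the file name, which tells you who holds it.
for lock in sorted(glob.glob(os.path.join(GDB_PATH, "*.lock"))):
    name = os.path.basename(lock)
    kind = ("read" if ".rd." in name
            else "schema" if ".sr." in name
            else "edit" if ".ed." in name
            else "other")
    print(f"{kind:6s} lock: {name}")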

Exchanging work before accurev promote

My colleague and I are participating in a huge project hosted in AccuRev. We've already created our own workspaces backed by a stream (let's call it zzz-stream), which is used by many other participants, not only by us.
The point is that we want to exchange our work between our workspaces, make some changes, exchange again, etc., BEFORE making the changes accessible to others. In other words, we don't want to propagate our changes until they are stable and tested, but we want to be able to work on them together.
My idea was to create a new stream (yyy-stream) backed by zzz-stream, and then change our workspaces to be backed by yyy-stream. But unfortunately I have no rights to create streams.
My second idea was to use a workspace as the backing stream, but that doesn't work because AccuRev can't use a workspace as a backing stream.
Is there any solution to our problem?
UPD: I accepted Brad's answer as the most detailed. However, AccuRev is too heavy and sluggish to be used effectively, so I actually prefer to use Git for internal needs on top of the AccuRev workspace (see "AccuRev externally, git internally").
Your idea of creating the yyy-stream is the EXACT right way to do it. The other options are decent workarounds for one-off situations, but creating the extra stream is simple and is fully leveraging AccuRev's capabilities.
That being said, I understand that your admins have stream creation locked down. They of course want control, but they should be allowing for maximum developer productivity rather than forcing workarounds like this. My guess is they have stream creation locked down to a particular group, enforced by the server-admin trigger. One common thing I have seen other large sites do is:
- allow streams to be freely created off of a list of acceptable streams (easy to do in the trigger)
- enforce naming rules on the stream creation. This is important to admins in large sites to keep things organized. Again, this is very easy to enforce via the server-admin trigger.
Bottom line, if this is a common situation, work with the admins to allow this capability as per the above. If they have any questions, they are more than welcome to contact AccuRev and we will help them out.
Your idea on using another stream for you and your peer is a good one and is commonly called a collaboration stream. If your site has stream creation locked down, you would need to work with your AccuRev administrator to make that happen.
Another option is for you and the other developer to pull the keeps from the other workspace into your own stream. This relies on both of you being diligent about doing keeps and then you can look at the history of the other developer's workspace to find the keep operation, right-click that transaction and then select Send to Workspace. The destination workspace must be your own.
A third option (more for a situation where you are in your workspace and know exactly which file you want to grab the other user's changes from) is to bring up the version browser for the file, right-click and select History/Browse Versions. Look for the other workspace, highlight the version in that workspace, right-click and select Send to Workspace. This will check out that version into your workspace.
This is similar to the change palette suggestion, but quicker if you're looking to do this on a file-by-file basis.
Another idea is to use a different version control system (e.g. git or svn) on top of the AccuRev workspace to exchange changes and keep our history separate from zzz-stream (similar to "AccuRev externally, git internally"). Only changed files should be added to the other VCS, not the whole project. Some merge problems occur, though.

Lotus Notes: Replication conflict caused by agent and user running on the document at same time

In one of our Lotus Notes databases, replication/save conflicts occur too frequently; the cause is a scheduled agent and a user working on the same document at the same time.
Is there any way to avoid this?
Thanks,
H.P.
Several options in addition to merging conflicts:
Change the schedule: The best way to avoid conflicts is to have your scheduled agents run at times when users are not likely to be accessing the system. If the LastContact field on a Client document is updated by an agent every hour as it checks all Contact documents, maybe the agent should run overnight instead.
Run the agent on user action: It may also be the case that the agent shouldn't run on a schedule, but when the user takes some action. For example, run the agent to update the Client document when the user saves the supporting Contact document.
Break the form into smaller bits: A third thing to consider is redesigning your form so that not every piece of data is on one main form. For example, if comments on recent contacts with a client are currently held in a field on the Client document, you might change the design to have a separate ClientMeeting form, from which the comments on the meeting are displayed in an embedded view or computed text (or designed using XPages).
Despite the fact that I am a developer, I think rep/saves are far more often the result of design decisions than anything else.
You can use the Conflict Handling option on the form(s) in question and select either Merge Conflicts or Merge/No Conflicts in order to have Notes handle merging of edit conflicts.
From the Help database:
At the "Conflict Handling" section of the Form Info tab, choose one of the following options for the form:
Create Conflicts -- Creates conflicts so that a replication conflict appears as a response document to the main document. The main document is selected based on the time and date of the changes and an internal document sequence number.
Merge Conflicts -- If a replication conflict occurs, saves the edits to each field in a single document. However, if two users edit the same field in the same document, Notes saves one document as a main document and the other document as a response document marked as a replication conflict. The main document is selected based on the criteria listed in the bullet above.
Merge/No Conflicts -- If replication occurs, saves the edits to each field in a single document. If two users edit the same field in the same document, Notes selects the field from the main document, based on time and date, and an internal document sequence number. No conflict document is generated, instead conflicting documents are merged into a single document at the field level.
Do Not Create Conflicts -- No merge occurs. IBM® Lotus® Domino(TM) simply chooses one document over another. The other document is lost.
In later versions of Notes there is the concept of document locking, and used properly that can prevent conflicts (but also add complexity).
Usually most conflicts can be avoided by planning to run the agents late at night when users aren't on the system. If that's not an option, then locking may be the best solution. If the conflicts aren't too many, you might benefit from adding a view filtered to show only conflicts, which would make finding and resolving them easier.
IMHO, the best answer to conflicts between users and agents is to make sure that they are operating on different documents. I.e., there are two documents with a common key. They can be parent and child if it would be convenient to show them that way in a view, but they don't have to be. Just call them DocA and DocB for the purposes of this discussion.
DocA is read and updated by users. When a user is viewing DocA, computed field formulas can pull information from DocB via DbLookup or GetDocField. Users never update DocB.
DocB, on the other hand, is read and updated by agents; agents only read DocA and never update it.
If you design your application any other way, then you either have to use locking -- which can create the possibility of not being able to update something when you need to -- or accept the fact that conflicts can happen occasionally and will need to be resolved.
Note that even with this strategy, you can still have conflicts if you have multiple replicas of the database. You can use the 'Conflict Handling' section of the Form properties to help minimize replication conflicts, as per @Per Henrik Lausten's answer, but since you are talking about an existing application, please also see my comment on his answer for additional information about what you would have to do in order to use this feature.
If this is a mission critical application, consider creating a database with lock-documents. That means, every time a user opens a document, a separate lock-document is created.
Then code the agent to check whether a lock-document exists for each document that the agent wants to modify. If one does, skip that document.
Document-close should remove the doc-lock.
The lock-doc should be created when the document is opened, not just when it is edited. This way, even when a user has the document open in read mode, the agent will not be able to modify it; otherwise the user might switch to edit mode afterwards and make changes.
If the agent's modifications take a long time, it should create lock-docs as well.

Accurev - why not Auto-Update?

Why isn't it standard behavior for Accurev to automatically run an "Update" upon opening the program? "Update" updates a user's local sandbox with the latest files from the building/promoted area.
It seems like expected functionality that the most recent files should be synchronized first.
I'm not claiming that it should always update, but curious as to why an auto-Update wouldn't be correct.
Auto-updating could produce some very unwanted results.
Take this scenario: you're in the middle of a development task, but you've made a mistake and need to revert a file that you just modified. So you open AccuRev, but before you have a chance to "revert to most recent version", you are bombarded with 100 files that have been changed upstream including the one you want to revert. You are now forced into the position of resolving all the merge conflicts before your solution will build, including the merge of your (possibly unstable) code in progress.
Requiring the user to manually update keeps a protective 'bubble' around the developer, allowing them to commit (keep) changes within their own workspace without bringing down code changes that could destabilise the work in their sandbox. When the developer gets to a point where his code is ready to share with others, that is the appropriate time to do an update and subsequently build/retest the merged codebase before promoting.
However there is one scenario that I do believe auto-updating could be useful: after a workspace is reparented. i.e. when a developer's workspace is moved from one part of the stream hierarchy to another. Every time we reparent we have to do a little dance:
Accept the confirmation dialog that reminds us (rather verbosely) that we need to update our workspace before we can promote any changes.
Double-click the workspace to view its files.
Wait for AccuRev to do a "Pending" search, to determine whether any file changes are waiting to be committed.
And finally, perform the Update.
Instead of just giving us a confirmation dialog, it would be nice if AccuRev could just ask us if we want to Update immediately.
I guess it depends on preference. I for one wouldn't like the auto-update feature.
Imagine you have a huge project that you don't want to rebuild every time you start AccuRev. After an auto-update you also couldn't debug, because the source files and the debugging info would no longer correspond.

Best IT/back-office system hacks? [closed]

Lots of people have things that their systems do for them or for their teams. Source control post-commit hooks are a standard example: having an automated build system that checks out the latest source, compiles, tests, and packages it is a back-office hack that most of us probably use.
What other cool things have you done?
We had one developer in our team who wasn't familiar with the concept of a Subversion conflict. He deduced that if he simply deleted all that weird stuff in his code and clicked resolve, everything would be OK (i.e., knocking out all the other changes in the file...).
Needless to say, after the 5th time this occurred, and the 5th time I had to explain why the defect I had just closed was recurring, I wrote a script.
It would diff the changes to a file to see whether a consecutive check-in deleted all the previous changes, and whether it was done by the nameless developer.
It would then send an email to the boss with a description of what happened and how much work was lost during the check-in.
There was no 7th occurrence.
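For anyone wanting to build something similar: a minimal sketch of that kind of post-commit watchdog, assuming a Subversion repository with its standard svnlook tool and a local SMTP relay (addresses and the "mostly deletions" threshold are placeholders):

import smtplib
import subprocess
import sys
from email.message import EmailMessage

BOSS = "boss@example.com"            # placeholder addresses
SENDER = "svn-watchdog@example.com"

def svnlook(repo, *args):
    # Thin wrapper around the svnlook CLI that ships with Subversion.
    return subprocess.run(["svnlook", *args, repo],
                          capture_output=True, text=True, check=True).stdout

def main(repo, rev):
    author = svnlook(repo, "author", "-r", rev).strip()
    diff = svnlook(repo, "diff", "-r", rev)
    lines = diff.splitlines()
    removed = sum(1 for l in lines if l.startswith("-") and not l.startswith("---"))
    added = sum(1 for l in lines if l.startswith("+") and not l.startswith("+++"))
    # Heuristic: a commit that is almost pure deletion probably wiped out
    # other people's changes while "resolving" a conflict.
    if removed > 50 and removed > 5 * max(added, 1):
        msg = EmailMessage()
        msg["Subject"] = f"r{rev} by {author}: {removed} lines deleted, {added} added"
        msg["From"], msg["To"] = SENDER, BOSS
        msg.set_content(f"Revision {rev} looks like a destructive conflict "
                        f"'resolution'.\n\n{diff[:5000]}")
        smtplib.SMTP("localhost").send_message(msg)

if __name__ == "__main__":
    # Subversion invokes post-commit hooks with <repo-path> <revision>.
    main(sys.argv[1], sys.argv[2])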
We have a traffic-light that shows whether our daily build succeeds, has failed tests or simply doesn't build.
Also, we have a light bar that lights up for a few seconds whenever we receive an upload from a customer.
We aren't staffed 24x7 but we have critical processes that run throughout the night. We created an in-house alerts system to notify us of serious system issues, failed mission-critical processes, etc. It uses text-to-speech to create a descriptive message and then connects to our automated dialer to call the appropriate people with the message.
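The dialer integration will be site-specific, but the text-to-speech half is only a few lines these days. A hedged Python sketch, using the pyttsx3 library as one possible offline engine (the dialer endpoint below is entirely made up):

import json
import urllib.request

import pyttsx3  # offline text-to-speech; one of several possible engines

def build_alert(message, wav_path="alert.wav"):
    # Render the descriptive message to a WAV file the dialer can play.
    engine = pyttsx3.init()
    engine.save_to_file(message, wav_path)
    engine.runAndWait()
    return wav_path

def page_on_call(message):
    wav = build_alert(message)
    # Hypothetical dialer API -- substitute whatever your dialer exposes.
    req = urllib.request.Request(
        "http://dialer.internal/api/call",
        data=json.dumps({"audio": wav, "group": "on-call"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

page_on_call("Nightly billing batch failed with exit code 3")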
Working at a web design company, I configured our dev server so we could see a working copy of a project in real time via a subdomain name. So if your name was joe and you were working on project jetfuel, you would go to joe.jetfuel.test-example.com and see your changes instantly without committing.
This was a simple hack that used subdomain names as a partial directory structure. Our htdocs path looked like this: htdocs/tag/project. We had a script (a PHP app that you would access at setup.test-example.com) that would create a new tag name for you, check out whatever version you wanted, and call the deploy script for that project (a rough sketch of this mapping appears below). If it succeeded, it would forward you to the new subdomain. You could then work on this new copy via a Samba share.
This worked really well for us since we always deployed to the same Linux build and our projects had simple database requirements.
Our original reason for doing this was that our developers worked on all kinds of different platforms. Besides fixing this platform problem, it was awesome for viewing changes and testing. We had all kinds of tags, ranging from people's names, trunk versions, and test tags all the way to prototypes like jquery-menu-hack.jetfuel.test-example.com.
Now that I look back I wonder how much easier it would have been to run virtual machines.
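A stripped-down sketch of the subdomain-to-checkout mapping described above, in Python rather than the original PHP, and assuming Subversion plus a wildcard virtual host (web root, repository URL, and domain are illustrative):

import os
import subprocess

HTDOCS = "/var/www/htdocs"                 # assumed web root
SVN_BASE = "http://svn.internal/repos"     # assumed repository URL

def deploy(tag, project, path="trunk"):
    # Check out <path> of <project> so it is served at <tag>.<project>.test-example.com.
    target = os.path.join(HTDOCS, tag, project)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    subprocess.run(["svn", "checkout", f"{SVN_BASE}/{project}/{path}", target],
                   check=True)
    # The web server maps <tag>.<project>.test-example.com -> htdocs/<tag>/<project>
    # via a wildcard virtual host, so no per-checkout config is needed.
    return f"http://{tag}.{project}.test-example.com/"

print(deploy("joe", "jetfuel"))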
We had a dev working on a classic ASP site who didn't believe in source control. The code went from his machine straight to the production box. This led to issues with lost changes or the inability to revert to a stable version. Since CruiseControl.Net has the ability to monitor a directory, I added a project that actually checked in files whenever they were copied to production. Completely backward from CC.Net's original intent, but we didn't lose any more code.
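Lacking CC.Net, the same safety net can be approximated with a small polling script; a hedged Python sketch, assuming a Subversion working copy in the production folder (path and interval are placeholders):

import subprocess
import time

PROD_DIR = r"D:\inetpub\wwwroot\site"   # assumed production web root

# Poor man's version of the CC.Net trick: poll the production folder and
# check in anything that changed, so deployments are never lost.
while True:
    subprocess.run(["svn", "add", "--force", "."], cwd=PROD_DIR)
    subprocess.run(["svn", "commit", "-m", "auto-capture of production copy"],
                   cwd=PROD_DIR)
    time.sleep(300)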
Put in a pre-commit hook that checks that the bug comment refers to an open bug assigned to the user doing the check-in (SCMBug can do this).
Then, to make life REALLY interesting, spell-check the comments!!
The commit comment, and the one in the code. (spell is my buddy)
Run the code through a code formatter set to company standard, and diff it against the original: if it's not in company official format, reject the commit (a rough sketch of these first checks follows below).
Do a coverage test with the unit test build.
Email all mistakes/errors caused to the development team.
I left OUT the name of the developer. They know they did it.
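A rough Python outline of the first three checks (the bug-tracker lookup is a placeholder for SCMBug or your own tracker's API, clang-format stands in for whatever "company standard" formatter you use, and GNU aspell is assumed to be installed):

import subprocess
import sys

def open_bugs_for(user):
    # Placeholder: ask the bug tracker (e.g. SCMBug or a REST API) for
    # bug ids that are open and assigned to this user.
    return {"1234", "1408"}

def check_commit(user, comment, changed_files):
    errors = []
    # 1. The comment must reference an open bug assigned to the committer.
    if not any(bug in comment for bug in open_bugs_for(user)):
        errors.append("comment does not reference one of your open bugs")
    # 2. Spell-check the comment (aspell reads stdin, prints misspellings).
    spell = subprocess.run(["aspell", "list"], input=comment,
                           capture_output=True, text=True)
    if spell.stdout.strip():
        errors.append(f"misspelled words: {spell.stdout.split()}")
    # 3. Format a copy of each file and diff it against the original.
    for f in changed_files:
        formatted = subprocess.run(["clang-format", f], capture_output=True,
                                   text=True).stdout
        if formatted != open(f).read():
            errors.append(f"{f} is not in company format")
    return errors

if __name__ == "__main__":
    problems = check_commit(sys.argv[1], sys.argv[2], sys.argv[3:])
    for p in problems:
        print("REJECTED:", p)
    sys.exit(1 if problems else 0)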
Not exactly hacks, but a couple of must-haves for IT dev work:
If you're using subversion, you've got to use CommitMonitor. (http://tools.tortoisesvn.net/CommitMonitor) It lets you monitor svn repositories for new commits & then review the new commits. Great if you're wanting to stay on top of what your team is doing. Particularly if you have a couple of juniors that need to be watched. ;)
Rsnapshot (http://www.rsnapshot.org/) is also invaluable - we have complete backup snapshots of our entire filesystem every four hours going back 2 years, and every day beyond that. It's like a data cube for your filesystem! The peace of mind this gives is pure bliss. :)
Hardly a hack, but back in the day, on our speedy VAX 11/730, our overnight process would print the file "BLAMMO.TXT" on the printer if something went amiss. Every morning, the first stop was the printer when coming in.
Back in the dotCom days about 9 years ago, I had to hack together a failover system between two different locations. We had a funky setup with a PowerBuilder front-end website and a PowerBuilder management tool. Data was stored in MSSQL 7.0. The webservers used IPX to communicate with the SQL Servers (don't ask). Anyway, I was responsible for coming up with a failover plan.
I ended up hacking together some Linux boxes and had them run our external DNS, one at each location. We had a remote site with a webserver and SQL Server, and I got SQL transaction replication working over a 128k ISDN IPX connection (of all things). Then I built a monitoring tool at our production site to send packets out to various upstream network handoffs. If we experienced more than 20% outage at the primary site, the monitoring tool ran a Perl script on the Debian box to change DNS and point to our secondary. Our secondary had a heartbeat with our primary DNS and monitoring station. It would duplicate records unless it lost both connections, at which point it would roll over to pointing DNS at the backup location.
The primary site would shut down the SQL Server at the primary location to break replication. Automated site-to-site failover using a 128k ISDN IPX connection :)
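The monitoring half of that story reduces to something like the following Python sketch (hosts, threshold, and the DNS-switch script are placeholders for what was site-specific kit; the original also broke SQL replication before switching):

import subprocess
import time

# Upstream handoffs to probe (placeholder addresses).
UPSTREAMS = ["203.0.113.1", "203.0.113.2", "198.51.100.1",
             "198.51.100.2", "192.0.2.1"]
FAILOVER_SCRIPT = "/usr/local/bin/switch-dns-to-secondary.sh"  # assumed

def reachable(host):
    # One ICMP echo with a short timeout (Linux ping syntax).
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          stdout=subprocess.DEVNULL).returncode == 0

while True:
    down = [h for h in UPSTREAMS if not reachable(h)]
    # More than 20% of upstream handoffs unreachable: flip DNS to the
    # secondary site.
    if len(down) / len(UPSTREAMS) > 0.20:
        subprocess.run([FAILOVER_SCRIPT], check=True)
        break
    time.sleep(60)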
Back at my previous job, we had to audit many tables for data changes (inserts, updates and deletes). Our support crew had to be able to search through this data to find changes that users made.
The temporary solution that had become semi-permanent was to store each non-SELECT query. However, this was a large system, so the table would grow by about 1.5 GB a day.
The solution I came up with was to create a script that, for all tables in an external list, created the appropriate triggers to audit each table, row, and column -- before and after, when and by whom -- and store it all in our new audit table. This table grew to about 10% the size of the older version and stored much more usable data. It enabled us to build a UI to search and view every change made to our data, without requiring any knowledge of SQL on the part of our support team or business users.
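The generator pattern can be sketched briefly; this Python version emits one illustrative audit trigger per listed table. The DDL is simplified pseudo-T-SQL (the original system was MSSQL, but the table list, key column, and audit-table layout here are all assumptions):

# Reads a list of table names and prints CREATE TRIGGER statements that
# copy old/new values into a shared audit table. The DDL below is a
# simplified illustration, not the original production T-SQL.
TEMPLATE = """
CREATE TRIGGER audit_{table}_upd ON {table} AFTER UPDATE AS
INSERT INTO audit_log (table_name, changed_at, changed_by, before_row, after_row)
SELECT '{table}', CURRENT_TIMESTAMP, CURRENT_USER, d.row_data, i.row_data
FROM deleted d JOIN inserted i ON d.id = i.id;
""".strip()

with open("tables_to_audit.txt") as f:     # external list, one table per line
    for table in (line.strip() for line in f if line.strip()):
        print(TEMPLATE.format(table=table))
        print("GO")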
This is at a lesser level, but I am fairly proud of a makefile I wrote for compiling the code for my research. It only needs to be given your source and header file names and can take care of the rest all by itself (though it does make the one assumption that you will not be compiling any header files into objects; only source files get compiled). The other downside is that it relies on the GNU make program's secondary expansion feature, so I don't know if it works with other make programs. Additionally, the compiler used needs to support something similar to gcc's -MM feature. Here's hoping no one laughs at it.
-include prereqs.mk

HEADERS=$(SRC_DIR)/gs_lib.h $(SRC_DIR)/gs_structs.h
SOURCES=$(SRC_DIR)/main.cpp $(SRC_DIR)/gs_lib.cpp
OBJECTS=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SOURCES))

release: FLAGS=$(GEN_FLAGS)$(OPT_FLAGS)
release: $(OBJECTS) prereqs.mk
	$(CXX) $(FLAGS) $(LINKER_FLAGS) $(OUTPUT_FLAG) $(EXECUTABLE) $(OBJECTS)

# Ask the compiler (gcc -MM style) for each source's prerequisites and
# rewrite "foo.o:" into "foo = \" so each list becomes a make variable.
prereqs.mk: $(SOURCES) $(HEADERS)
	$(CXX) $(DIR_FLAGS) $(MAKE_FLAG) $(SOURCES) | sed 's,\([abcdefghijklmnopqrstuvwxyz_]*\).o:,\1= \\\n,' > $@

# Secondary expansion lets each object depend on the variable named after
# it (generated above), so its prerequisite list stays current.
.SECONDEXPANSION:
$(OBJECTS): $$($$(patsubst $(OBJ_DIR)/%.o,%,$$@))
	$(CXX) $(FLAGS) $(NO_LINK_FLAG) $(OUTPUT_FLAG) $@ $(patsubst $(OBJ_DIR)/%.o,$(SRC_DIR)/%.cpp,$@)
Obviously I dropped the definition of a number of variables, but I think it gets the idea across.
Since my coding tools and style are compatible with the requirements of this script, I like to use it. All I need to do to add a new piece of source code is add its name to the appropriate variable, and the rest is taken care of.
We have Twitter accounts for many projects which tweet things like commit messages, notices from builds, failed unit tests, deployments, and bug-tracking activity -- any kind of event associated with the project. Running a client like Gwibber (which displays a pop-up for each new status) is a great way to stay in touch with the activity on the projects you are interested in. Using Twitter is good as you can take advantage of all the third-party apps, such as the iPhone clients.
Add a commit-hook check for VRML/3D-model files with absolute paths to textures/images. f:/maya/my-textures/newproject/xxxx.png just doesn't belong on the server.
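For Subversion, such a check fits naturally in a pre-commit hook; a hedged Python sketch (the extension list and path pattern are guesses at what counts as a model file with an absolute texture path):

import re
import subprocess
import sys

# Reject model files that reference textures by absolute path
# (e.g. f:/maya/my-textures/...), which break once on the server.
ABS_PATH = re.compile(r'["\'](?:[a-zA-Z]:[/\\]|/)[^"\']*\.(?:png|jpg|jpeg|tga)["\']')

def changed_files(repo, txn):
    out = subprocess.run(["svnlook", "changed", "-t", txn, repo],
                         capture_output=True, text=True, check=True).stdout
    return [line[4:] for line in out.splitlines() if line and line[0] in "AU"]

def main(repo, txn):
    for path in changed_files(repo, txn):
        if not path.lower().endswith((".wrl", ".vrml")):
            continue
        content = subprocess.run(["svnlook", "cat", "-t", txn, repo, path],
                                 capture_output=True, text=True, check=True).stdout
        if ABS_PATH.search(content):
            print(f"{path}: absolute texture path found", file=sys.stderr)
            sys.exit(1)

if __name__ == "__main__":
    # Subversion invokes pre-commit hooks with <repo-path> <txn-name>.
    main(sys.argv[1], sys.argv[2])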
Back in 1993, when source control systems were really expensive and unwieldy, the company I worked for had an in-house source control system built as 4DOS scripts. It wasn't as sophisticated as most current source control systems; for example, it didn't have branching or integrates, but it did the basic job of supporting revision history, checkout/checkin, and rudimentary conflict resolution.