Jira Tempo - Synchronize deleted hours with external system - jira-rest-api

I'm trying to import into my system the hours logged in Jira (with the Tempo Timesheets add-on) using the Tempo API.
Steps
I import the most recently logged hours from Jira into my system without any problem.
I edit some hours in Jira that were already exported to my system, and I can get all modified time entries synchronized with my system.
Problem
The problem is when I delete some hours in Jira: when I call the Tempo API, the deleted hours no longer appear in the result, neither as deleted nor as modified.
So I cannot know which ones I have to delete in my system.
Does anyone know how to get deleted hours in Jira in a range of dates?

When you delete a worklog (time entry) from Tempo, it internally changes the worklog description to "Worklog DELETED in Jira" and changes the hours value to "0.0".
If these two things happen, we can determine that the worklog has been deleted.
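Based on that observation, the check can be sketched as a filter over the worklogs returned by the Tempo API. The field names (`description`, `hours`) are assumptions for illustration; check them against the actual payload of your Tempo endpoint.

```python
# Minimal sketch: detect worklogs that Tempo has flagged as deleted.
# Field names are assumed -- verify against your Tempo API response.

DELETED_DESCRIPTION = "Worklog DELETED in Jira"

def find_deleted_worklogs(worklogs):
    """Return the worklogs that carry Tempo's deletion markers."""
    return [
        w for w in worklogs
        if w.get("description") == DELETED_DESCRIPTION
        and float(w.get("hours", 0)) == 0.0
    ]

worklogs = [
    {"id": 1, "description": "Dev work", "hours": 2.5},
    {"id": 2, "description": "Worklog DELETED in Jira", "hours": 0.0},
]
print([w["id"] for w in find_deleted_worklogs(worklogs)])  # -> [2]
```

Requiring both markers (description and zero hours) avoids false positives from a worklog whose text merely happens to contain similar wording.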

Related

Recommendations for multiple migration runs?

Could anyone provide any best practices about multiple migration runs? Moving from TFS 2017.3.1 to Azure DevOps Service. Dealing with a fair number of work items (32k). Of course, TSTU throttling is making the run take a long time, so I was thinking of pushing what I could up front, then a second pass to pick up the new work items since the first big push. So...enabling UpdateSourceReflectedId would set the ReflectedWorkItemId on the source items that have already been migrated. But what happens if someone changes a work item that has already been pushed? Would the history delta get picked up? How is that typically resolved...I was thinking maybe a Querybit like: ReflectedWorkItemId <> '' and ChangedDate > (last run time), but is that necessary? Those already exist on target...would ReplayRevisions pick up only the missing changes? TIA...
I usually do the following for large runs:
Open work items edited in last 90 days
Closed work items edited in last 90 days
Open out to more days in chunks
The important thing to note is that links are created only when both ends of the link exist.
After a long run you can then rerun "edited in last month" to bring any changes across.
Changes to avoid in the Source:
changing work item type
moving work item between team project
We handle these, but loosely.

MS Access Backend data corruption

I have an Access database that was designed and developed back in 1997–99. All user interaction is through forms and reports; there is no end-user access to the back-end tables. It has worked flawlessly for the last 19-plus years. Both front and back end are still in .mdb format, but the front ends, which are local to each workstation and are replaced with a clean copy upon every login, are running on Access 2016. There are at a maximum 6 users in the database at one time; usually there are only 4.
Starting about 90 days ago, the back end would randomly corrupt when a request to write a record was made. The error message is "Database is in an unrecognized format". We have replaced old workstations and the server that hosts the back end; in addition, we replaced the switch that all computers connect to. On the workstations that were not replaced, Office has been re-installed and all updates applied. The corruption cannot be reproduced consistently; it happens in different forms and from different workstations at random. After the corruption, we have to delete the lock file and compact and repair the back end, and it will work just fine until the next corruption. The data that was being written is there, so there has been no data loss.
The back-end data was rebuilt last year to remove the random primary key values that were created when the database was replicated over a dial-up modem back when the database was first developed. The replication functionality was turned off approximately 17 years ago. The back end has been rebuilt again from scratch: each table was created in a new database and all the indexes and relationships were rebuilt. The data from each table was exported to a text file and then imported into the new database.
There were no changes made to the front end in the three or four weeks before this issue started happening. Just to ensure that it was not something in the front end, it was rolled back to a version that was working fine in February of this year; unfortunately, that did not resolve the issue.
None of these steps have resolved the back-end corruption, and if anything, the corruption is happening more frequently. The only thing that works at this point is to have one user at a time in the database; as soon as a second user opens the front end, the back end will corrupt within a few minutes.
Any thoughts or ideas would be greatly appreciated.
Thank you
Steve Brewer
Update:
This is a known bug introduced by one of the Office/Windows updates. See http://www.devhut.net/2018/06/13/access-bu...ognized-format/ for all the details and workaround/solution.

MS ACCESS TransferSpreadsheet VBA to include extra information in import data

I am building an Access 2010 db which will store and query information relating to time spent by users in our team. Part of the reporting needs to include whether timesheets have been submitted on time.
The process is currently being managed in Excel but is becoming cumbersome due to the growing size of the consolidated data. In the current process, the flag indicating whether someone is late with their timesheet is applied manually.
Instead of manually adding a Yes/No value to the Excel data, I wondered whether it was possible to set up separate TransferSpreadsheet processes in Access to upload the Excel data (and attach them to separate command buttons) such that, depending on which one is executed, the import process adds a Yes or a No value to the last column in the data as it's being uploaded.
That way we can import the Excel data for those who submitted their timesheets on time (and 'stamp' them Yes for being on time), and any subsequently late-submitted timesheet data can be imported later (and 'stamped' with a No).
I have spent several hours looking at online forums and instruction pages but cannot find anything close to what I am trying to achieve, hence the reason for posting this here.
This is just one of the options I am considering but my VBA skills are insufficient to establish whether such a process could be handled in VBA. All help appreciated. Thanks.
Solved this one myself with a bit of perseverance. I ended up running a few DoCmd.RunSQL commands to alter/delete/insert into the tables I had, used a staging ('join') table to load the data from Excel, and then ran a command to append the data from the staging table to the main table. I just invoke slightly different commands to set the table field depending on whether the data was submitted late or on time.
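The staging-table pattern described above can be sketched outside Access as well. The following uses Python/sqlite3 rather than Access VBA, and the table and column names are made up for illustration: rows are loaded into a staging table, then appended to the main table with an on-time flag chosen per import run.

```python
# Sketch of the staging-table import: stamp each batch Yes or No on append.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (user TEXT, hours REAL)")
conn.execute("CREATE TABLE timesheets (user TEXT, hours REAL, on_time TEXT)")

def append_import(conn, on_time_flag):
    # Equivalent of the DoCmd.RunSQL append query: stamp every staged row,
    # then clear the staging table for the next import run.
    conn.execute(
        "INSERT INTO timesheets (user, hours, on_time) "
        "SELECT user, hours, ? FROM staging", (on_time_flag,))
    conn.execute("DELETE FROM staging")

conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [("alice", 7.5), ("bob", 8.0)])
append_import(conn, "Yes")   # on-time batch

conn.execute("INSERT INTO staging VALUES ('carol', 6.0)")
append_import(conn, "No")    # late batch

print(conn.execute(
    "SELECT user, on_time FROM timesheets ORDER BY user").fetchall())
# -> [('alice', 'Yes'), ('bob', 'Yes'), ('carol', 'No')]
```

In Access the same idea maps to two append queries (one per command button) that differ only in the literal Yes/No value they write.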

asp.net mvc; edit option only for one user at a time

I have a table with three fields and a few records. If a user is editing a record from the table, other users shouldn't be allowed to edit that record at the same time. What steps can I take to make this happen?
A lot of people from a desktop-application background will wonder how this is done in a web application.
Locked record flag
One approach in the desktop world is to have a Boolean column on the row that indicates that it is being edited, and by whom. You could certainly do this with a web app, but it is a very bad approach because if a user visits the Edit page, placing the record into a locked state, then leaves the page, it will forever be in a locked state. You have no definitive way to tell that the user doesn't still have the edit page open.
Time sensitive lock
The airline-reservation approach is a variation on the above, but you would also have a LockedUntilUtc column, a datetime indicating how long the record is locked for. Let's say Bob visits a page for a record; when serving the page from the GET action you also set the locked flag and set LockedUntilUtc to 10 minutes in the future. 5 minutes later Sarah visits the page but gets a "currently locked" error because you checked LockedUntilUtc and it is currently in the future. Another 6 minutes elapses (a total of 11 minutes since locking) and someone visits the page; LockedUntilUtc is now in the past, so you give the lock to the new user.
This seems like a reasonable compromise, but it is rife with problems sure to frustrate users. First, there is no easy way to queue up users who need access to edit the record. Sarah could try 10 times, and then just as it passes 10 minutes, Jimmy visits the page and because he was the first person after the lock expired, he grabbed the next lock without Sarah getting a chance. Sarah calls your help desk and says she waited 10 minutes for the lock to expire, and it's now been 15 minutes and she still can't get to the page. Your helpdesk probably doubts she really waited a full 10 minutes, back and forth ensues.
You also must implement a client side timer/display for whoever currently has the lock so they know how much time they have left before it can expire.
Optimistic concurrency
This is the right approach in most cases. You don't actually lock the record at all. Instead, many users can visit the edit page. When they save an edit, the form includes both the original values and the new edited values. The server compares the original values from the form with the current values in the database to see if there was an interim edit.
The original values are from some point in the past (when Bob initially visited the edit page). The current values are from right now. Between the past and now, if Sarah also visited the edit page and successfully saved changes to the database, then Bob's original values will differ from the current values in the database. Thus when Bob attempts to save his changes, the server will see that his original values are different from the current values in the DB, and throw an error. You will need to decide how you handle this situation. Usually you let the user know that someone else has edited the page since they loaded it, refresh the page, and they lose their edits. Entity Framework supports optimistic concurrency.
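The Bob-and-Sarah scenario boils down to a simple comparison on save. A minimal sketch, with an in-memory dict standing in for the database row (a real implementation would compare inside a transaction, or use a rowversion column as noted below):

```python
# Sketch of an optimistic-concurrency check: reject the save if the row
# changed between page load and postback.

class ConcurrencyError(Exception):
    pass

def save_edit(db_row, original_values, new_values):
    # Interim edit by another user? Original values no longer match the DB.
    if db_row != original_values:
        raise ConcurrencyError(
            "Record was modified by another user; reload the page.")
    db_row.update(new_values)

db = {"title": "Q3 report", "owner": "Bob"}
bobs_original = dict(db)   # what Bob saw when he opened the edit page

db["owner"] = "Sarah"      # Sarah saves her change first

try:
    save_edit(db, bobs_original, {"title": "Q3 report (final)"})
except ConcurrencyError as e:
    print(e)               # Bob's save is rejected; Sarah's edit survives
```

Comparing a single version token instead of every field gives the same guarantee with far less data round-tripped, which is what the Timestamp approach mentioned below does.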
Ajax'ified Optimistic Concurrency
You can also have the client occasionally ping the server with the original values so the server can check whether your page is stale (i.e. another user changed something) and pop up a message. This improves the user experience by giving the user earlier notice that another user has edited the page, so they don't get too far along in making edits they are going to lose anyway. They can also copy/paste their edits out of the browser so they can refresh the page and keep a reference of what they changed.
There is a Timestamp column type in SQL Server which can work in tandem with Entity Framework to lower the overhead involved in checking for changes, so that you don't need to keep the entire record of original values in each client and pass them back and forth: http://www.remondo.net/entity-framework-concurrency-checking-with-timestamp/
Granular edits
One approach we use a lot is to ajax'ify every field so that edits to a single field are committed immediately. This is accomplished using a jQuery library called x-editable. The user edits a single field, confirms the edit, and that value is sent to the server. You could combine this with optimistic concurrency, checking either the entire record for changes or just the single field. If changes are detected, you reject the edit and refresh the page. This can be a much friendlier experience for the user, primarily because the user gets the "another user edited the page" error instantly when editing a single field. This prevents them from wasting a lot of time editing a large number of fields, only to find their edit was rejected and they have to redo all of their edits. Instead, they edit a single field, get the error, the page refreshes, and they only have to repeat that one field edit and continue from there.
http://vitalets.github.io/x-editable/demo-bs3.html

Automated Heroku PostgreSQL Updates without Client Request

I am new to web development. I am building a simple text-based web game using Heroku and PostgreSQL. I have a SQL table for users and their coin amounts (their earnings).
I can read and write data in this database through requests made by players. However, what I want to achieve is to automate the addition of coins to each user's account.
So let's say at the beginning of each hour I want to add 15 coins to each user. How can I achieve this kind of automation with Heroku and PostgreSQL?
I tried searching for almost an hour, but I wasn't even able to find the name of this process :(
While you could schedule this (as sonnyhe notes), it's probably better not to.
Instead, just update the value when you're updating their balance for some other reason, by adding the difference between the time you last added coins and the current time, extracting the hours, and multiplying by 15.
If the user asks to just view the balance, all you need to do is display its last stored value plus the hours since then * 15.
That way you're not doing lots of unnecessary updates and causing unnecessary load.
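The lazy-accrual idea above can be sketched in a few lines. The names (`coins`, `last_accrued_at`, a 15-coins-per-hour rate) are illustrative; in a real app the stored balance would live in the users table and `settle` would run inside the same transaction as the spend.

```python
# Sketch of lazy coin accrual: compute owed coins from elapsed whole hours
# instead of running an hourly job.
from datetime import datetime

COINS_PER_HOUR = 15

def current_balance(stored_coins, last_accrued_at, now):
    """Stored balance plus 15 coins per full hour elapsed."""
    hours = int((now - last_accrued_at).total_seconds() // 3600)
    return stored_coins + hours * COINS_PER_HOUR

def settle(user, now):
    """Fold accrued coins into the stored balance, e.g. before spending."""
    user["coins"] = current_balance(user["coins"], user["last_accrued_at"], now)
    user["last_accrued_at"] = now

user = {"coins": 100, "last_accrued_at": datetime(2018, 1, 1, 9, 0)}
now = datetime(2018, 1, 1, 12, 30)   # 3 full hours later
print(current_balance(user["coins"], user["last_accrued_at"], now))  # -> 145
settle(user, now)
print(user["coins"])  # -> 145
```

Viewing a balance only needs `current_balance`; only operations that change the stored value (spending, earning) need to `settle` first.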
Here is a gem you can look into.
Just include the gem rufus-scheduler in your Gemfile.
Then you can set something up in config/initializers/scheduler.rb:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '1h' do
  # Add 15 coins to every user at the start of each hour.
  User.update_all('coins = coins + 15')
end