Undo editing a standard process in PowerApps

I'm using PowerApps with Dynamics 365. While practicing in PowerApps I edited the "Lead to Opportunity" process and deleted one of its stages. I thought the deletion would only affect my app and solution, but I was surprised to find that the stage also disappeared from the Sales Hub and Sales Team Member apps.
What can I do to restore the original process?

If you just need that out-of-the-box BPF, try exporting it in a solution from another environment and importing it into your affected organization. That should solve the problem.
There is no soft delete; once a change is saved and published, it overwrites the original. That being said, take a backup in a solution (zip file) and keep it aside, or do a "Save As" before doing any customization. Even better, keep a lower environment for POC and R&D work so you don't disturb the Dev environment that mirrors Prod.


How to revert to the TFS 2015 stock Agile process template

After (or before) we convert from TFS 2012.2 to TFS 2015.3 (which we have done just fine in a test run), we would like to revert our team project to the standard TFS 2015 Agile process template and no longer use the customized Agile process that we had modified from TFS 2012. We are quite willing to delete all of our work items and start over, but we need to keep the team project history and changesets. Does anyone know how to do this? Answers to prior questions on this did not address this situation. Thanks.
There is no easy way to do it. Basically the steps require you to use a lot of witadmin commands. Start by deleting any work item types that were added and don't exist in the default template.
Then push the standard work item definition for each work item type.
Then push the categories.
Then push the process configuration.
Then delete any fields that are no longer used.
That should bring you back to the standard template.
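For illustration, here is a rough sketch of what those witadmin calls could look like, assuming a TFS 2015 collection URL and project name and that you have the default Agile template's XML files on disk; the collection URL, project name, type names, field names and file paths below are all placeholders:
rem placeholders - adjust collection, project, type names and paths to your setup
set COLL=http://tfsserver:8080/tfs/DefaultCollection
set PROJ=MyTeamProject
rem 1. destroy work item types that are not in the default Agile template
witadmin destroywitd /collection:%COLL% /p:%PROJ% /n:"MyCustomType"
rem 2. re-import the standard definition for each remaining work item type
witadmin importwitd /collection:%COLL% /p:%PROJ% /f:"Agile\WorkItem Tracking\TypeDefinitions\UserStory.xml"
witadmin importwitd /collection:%COLL% /p:%PROJ% /f:"Agile\WorkItem Tracking\TypeDefinitions\Bug.xml"
rem 3. push the standard categories
witadmin importcategories /collection:%COLL% /p:%PROJ% /f:"Agile\WorkItem Tracking\Categories.xml"
rem 4. push the standard process configuration
witadmin importprocessconfig /collection:%COLL% /p:%PROJ% /f:"Agile\WorkItem Tracking\Process\ProcessConfiguration.xml"
rem 5. list the fields and delete any that are no longer referenced
witadmin listfields /collection:%COLL%
witadmin deletefield /collection:%COLL% /n:MyCompany.CustomField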
An alternative you could try is to use the WitMorph project. You can write a set of rules to migrate your data back into working order.

Composite C1 - develop locally, sync to live site

I have a couple of Composite C1 CMS websites.
To edit them currently I use the web based CMS on the live site.
However, I would like to update the code and content locally in Visual Studio and then sync to the web. The problem is that if my local copy is older than the one online (e.g. a non-techy client has edited something on the live site) and I Web Deploy, it will go over the top of the newer file on the server.
I need a solution that works out which change is newest. I can't find anything on Google or in the C1 docs.
How can I sync, preferably using Web Deploy? Do I need some kind of version control?
Is there a best practice for this? Editing the live site through the web interface seems a bit dicey and is slow.
The general answer to this type of scenario seems to be to use the Package Creator. With that you can develop locally, add the files you've changed to a package, and install that package on a live site. This solution does not cover all the parts of your question though, and has certain limitations:
You cannot selectively add content to a package. It's all pages or no pages.
Adding datatypes is easy, but updating them later requires you to delete the datatype (and data), and recreate the datatype.
In my experience packages work well for incremental site updates, if you limit the package contents to front-end stuff like CSS, images and such.
You say you need a solution that works out the newest changes - I believe the only solution to this is yourself, with the aid of some tooling. I don't think there's a silver bullet solution here.
Should you use a version control system? Yes! By all means. Even if you are not sharing your code with anyone, a VCS is a great way to get to know Composite C1 from a file system perspective, as you can carefully track what files are changed on disk as you develop. This knowledge is crucial when you want to continuously add features to a website that is already alive and kicking - you need to know what to deploy, and what not to touch.
Make sure you read the docs on how Composite fits in VCS: http://docs.composite.net/Configuration/C1-and-Version-Control
I assume that your sites are using the XML data storage (if you were using the SQL Data Store, your content would not be overridden upon sync).
This means that your entire web application lives in one folder on disk on the web server, which can be an advantage here.
I'll try to outline a solution that could work for you, although I must stress that I've never tried this - I'm making it up as I type.
Let's say you're using git: download the site in its entirety from the production web server, and commit the whole damned thing* to your master branch.
Then you create a new feature branch from that commit, and start making the changes you want to deploy later, and carefully commit your work as you go along, making sure you only commit the changes that are needed for your feature to work, to the feature branch.
Now, when you are ready to deploy, you switch back to the master branch, and again download the entire site and commit it to master.
You then merge your feature branch into the master branch, and have git do all the hard work of stitching your changes in with the changes from the live site. There are bound to be merge conflicts, and that is where you will have to jump in and decide for yourself what content needs to go live.
After this is done and tested, you can web deploy the site up to the production environment.
Changes to the live site might have occurred while you were merging, so consider closing the site, or parts of it, during this process.
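For what it's worth, a rough command-line sketch of that workflow; the local path, branch name and commit messages are placeholders, and this assumes the live site has already been downloaded to a local folder:
cd C:\sites\mysite
git init
git add -A
git commit -m "snapshot of live site"
git checkout -b my-feature
rem ... make and commit your feature changes here, as small focused commits ...
git checkout master
rem ... overwrite the working copy with a fresh download of the live site ...
git add -A
git commit -m "snapshot of live site before deploy"
git merge my-feature
rem resolve any conflicts by hand, test, then Web Deploy the merged result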
If you are using the SQL Data Store I suggest paying for a tool like Red Gate's SQL Compare and SQL Data Compare or SQL Delta, to compare your dev database to the production database, and hand pick SQL scripts that can be applied to the production database along with your feature deployment.
* Do consider using a .gitignore file to avoid committing certain files - refer to the docs for more info.
I suppose you should use the Package Creator
Also have a look here: http://docs.composite.net/Configuration/C1-and-Version-Control

Many users using one program (.exe) that includes datasets

I created a time recording program in VB.NET with SQL Server as the backend. Users can send their time entries into the database (I used the typed dataset functionality) and run different queries to get overviews of their working time.
My plan was to put the exe in a folder on our network and let the users make a link on their desktops. Every user writes into the same table but can only see his own entries, so there is no possibility that two users manipulate the same dataset.
During my research I found a warning that "write contentions between the different users" can occur. Is that so in my case?
Does anyone have experience with many users running the same exe that uses datasets, and could you advise me whether this works or what I should do instead?
SQL Server will handle all of your multi-user DB access concerns.
Multiple users accessing the same exe from a network location can work but it's kind of a hack. Let's say you wanted to update that exe with a few bug fixes. You would have to ensure that all users close the application before you could release the update. To answer your question though, the application will be isolated to each user running it. You won't have any contention issues when it comes to CRUD operations on the database due to the network deployment.
You might consider something other than a copy/paste style publishing of your application. Visual Studio has a few simple tools you can use to publish your application to a central location using ClickOnce deployment.
http://msdn.microsoft.com/en-us/library/31kztyey(v=vs.110).aspx
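As a rough sketch, the publish step can also be scripted; this assumes the ClickOnce publish properties (publish URL, version) are already configured on the project in Visual Studio, and the project and share names below are made up:
rem build and publish the ClickOnce package from the command line
msbuild TimeRecorder.vbproj /target:publish /p:Configuration=Release
rem the generated ClickOnce files land under bin\Release\app.publish -
rem copy them to the central location users install from
xcopy "bin\Release\app.publish" "\\server\deploy\TimeRecorder\" /E /I /Y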
My solution was to add a simple shutdown timer in the form, which alerts users to save their data before the program closes at 4 AM.
If I need to upgrade, I just replace the .exe on the network.
Ugly and dirty, yes... but it has worked like a charm for the past 2 years.
Good luck!

How do I distribute updates to an Access database front end?

I've got an Access 2007 database that I developed which connects to SQL Server for the actual data storage. I used the Package Solution Wizard to create a distributable installer which included access runtime (with an ACCDE file) which I went around and installed on 15 or so PCs. Anyway, my question is, what is the best way to distribute updates to this database? Right now I'd need to go around and remove and reinstall. That's not a problem... I was just wondering if there was another way.
I've tried leaving the front end on a network share, but it seems that most people suggest storing the front end on the local machine, which makes sense. The problems I've run into when I leave it on a network share (at least with Access 2003 mdbs) are that I find myself needing to compact and repair often, and I also have to kill the open sessions (users who have the file open) when upgrading. I would imagine it could also hypothetically create an unnecessary bottleneck if the user was not on the local network.
Automating front-end distribution is trivial. It's a problem that has been solved repeatedly. Tony Toews's http://autofeupdater.com is one such solution that is extremely easy to implement and completely transparent to the end user.
We developed a VBScript 'launcher' for our Access apps. That is what is linked to on the Start menu of users' PCs, and it does the following:
It checks a version.txt file located on a network share to see whether it contains different text from a locally stored copy.
If the text is different, it copies the Access mdb and the new version.txt to the user's hard drive.
Finally it runs the mdb in Access.
In order to distribute an update to the users' PCs, all that is required is to change the text in version.txt on the network share.
Perhaps you can implement something similar to this.
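A minimal batch sketch of that launcher idea (the share path, local folder and file names are placeholders to adapt):
@echo off
rem only copy the front end down when version.txt on the share has changed
set SHARE=\\server\apps\timeapp
set LOCAL=%LOCALAPPDATA%\timeapp
if not exist "%LOCAL%" mkdir "%LOCAL%"
fc /b "%SHARE%\version.txt" "%LOCAL%\version.txt" >nul 2>&1
if errorlevel 1 (
    rem version differs, or there is no local copy yet - pull the new front end
    copy /y "%SHARE%\frontend.mdb" "%LOCAL%\frontend.mdb"
    copy /y "%SHARE%\version.txt" "%LOCAL%\version.txt"
)
rem open the local copy with whatever Access version is associated with .mdb
start "" "%LOCAL%\frontend.mdb"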
Make a batch file on the server (network drive).
Create a shortcut link to that batch file.
Copy the shortcut to User's Desktop.
When user double-clicks on shortcut, it will copy a fresh copy from network to local.
Replace the old database.adp on the server drive when you release a new version.
Each user gets a copy of database.adp on their machine.
There are also instructions available for removing the security warning when opening a file from a network share.
Batch File
#ECHO OFF
REM copy from network drive to local
xcopy "Your_Network_Drive\database.adp" "C:\User\database.adp" /Y /R /F
REM call your database file - Access 2007
"C:\Program Files\Microsoft Office\Office12\MSAccess.EXE" "C:\User\database.adp"
This is a very old post. I used the autofeupdater until it stopped working, so I wrote one of my own, and it has evolved over the last few years into something that I have used with many clients. It's simple to use and there is no interface - just an EXE and a very simple config file.
Please check it out here. I can also help with custom solutions if none of the configurations work for your needs. http://www.dafran.ca/MS-Access-Front-End-Loader.aspx
After trying all of the solutions above (not exactly these solutions but these are the common suggestions in the Access community), I developed a system entirely within Access using VBA that allows an admin DB to create and publish objects to client DBs without the need for user intervention or management of multiple DB files.
This approach has several benefits:
1. It simplifies the development process by having a dedicated environment (admin DB) for development and testing totally separate from the client DBs.
2. It simplifies the update/distribution process by allowing a developer to push out updates in real time that client DBs can implement in the background, without involving users. Can also allow devs to roll back to previous versions if desired.
3. It could be used as a kind of change management system within Access for developers who want to commit multiple changes to objects and modules and retain past changes.
4. It allows for easier user access control by allowing an admin to easily assign certain objects to specific users/roles without needing to maintain multiple versions of the DB.
I will hopefully post the code to GitHub soon; I just have to get clearance from my workplace to release it. I will edit this post to include the link when I have.
We have usually kept the Access front ends on network drives, and just put up with the need to compact and repair on a regular basis. You will probably find you need to do that even when they are installed locally, anyway.
If you must have it installed locally, there are various tools which will enable you to "push out" software updates, and the guys over on ServerFault would have more information on those. Assuming such tools aren't available, the only other option I can think of is to write a small loader program that checks the local .MDB against a master copy on the server, and re-copies it across if they are different, before then launching the MDB.

Best IT/back-office system hacks? [closed]

Lots of people have things that their systems do for them or for their teams. Source control post-commit hooks are a standard example: having an automated build system that checks out the latest source, compiles, tests, and packages it is a back-office hack that most of us probably use.
What other cool things have you done?
We had one developer on our team who wasn't familiar with the concept of a Subversion conflict. He deduced that if he simply deleted all that weird stuff in his code and clicked resolve, everything was OK (i.e. knocking out all the other changes in the file...).
Needless to say, after the 5th time this occurred, and the 5th time I had to explain why the defect I had just closed was recurring, I wrote a script.
It would diff the changes to a file to see whether a subsequent check-in deleted all the previous changes and whether it was done by the nameless developer.
It would then send an email to the boss with a description of what happened, and how much work was lost during the checkin.
There was no 7th occurrence.
We have a traffic-light that shows whether our daily build succeeds, has failed tests or simply doesn't build.
Also, we have a light bar that lights up for a few seconds whenever we receive an upload from a customer.
We aren't staffed 24x7 but we have critical processes that run throughout the night. We created an in-house alerts system to notify us of serious system issues, failed mission-critical processes, etc. It uses text-to-speech to create a descriptive message and then connects to our automated dialer to call the appropriate people with the message.
Working at a web design company, I configured our dev server so we could see a working copy of a project in real time via a subdomain name. So if your name was joe and you were working on project jetfuel, you would go to joe.jetfuel.test-example.com and see your changes instantly without committing.
This was a simple hack that used subdomain names as a partial directory structure. Our htdocs path looked like this: htdocs/tag/project. We had a script (a PHP app that you would access via setup.test-example.com) that would create a new tag name for you, check out whatever version you wanted and call the deploy script for that project. If it succeeded it would forward you to the new subdomain. You could then work on this new copy via a Samba share.
This worked really well for us since we always deployed to the same linux build and our projects had simple database requirements.
Our original reason for doing this was that our developers worked on all kinds of different platforms. Besides fixing this platform problem, it was awesome for viewing changes and testing. We had all kinds of tags, ranging from people's names, trunk versions and test tags all the way to prototypes like jquery-menu-hack.jetfuel.test-example.com.
Now that I look back I wonder how much easier it would have been to run virtual machines.
We had a dev working on a classic ASP site who didn't believe in source control. The code went from his machine straight to the production box. This led to issues with lost changes or the inability to revert back to a stable version. Since CruiseControl.NET has the ability to monitor a directory, I added a project that actually checked in files whenever they were copied to production. Completely backward from CC.Net's original intent, but we didn't lose any more code.
Put in a pre-commit hook that checks that the bug comment refers to an open bug, assigned to the user doing the check-in. (SCMBug can do this; a stripped-down sketch of this check is below.)
Then to make life REALLY interesting, spell check the comments!!
The commit comment, and the one in the code. (spell is my buddy)
Run the code through a code formatter set to company standard, and diff it against the original: if it's not in company official format, reject the commit.
Do a coverage test with the unit test build.
Email all mistakes/errors caused to the development team.
I left OUT the name of the developer. They know they did it.
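For illustration, a stripped-down sketch of just the bug-reference part as a Windows SVN pre-commit hook (pre-commit.bat); the BUG-#### pattern is made up, and a real setup would also check the bug tracker (which is what SCMBug does):
@echo off
rem SVN passes the repository path and transaction id to the hook
set REPOS=%1
set TXN=%2
rem reject the commit if the log message does not mention a bug id like BUG-1234
svnlook log -t %TXN% %REPOS% | findstr /r "BUG-[0-9][0-9]*" >nul
if errorlevel 1 (
  echo Commit rejected: the log message must reference an open bug, e.g. BUG-1234. 1>&2
  exit 1
)
exit 0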
Not exactly hacks, but a couple of must-haves for IT dev work:
If you're using subversion, you've got to use CommitMonitor. (http://tools.tortoisesvn.net/CommitMonitor) It lets you monitor svn repositories for new commits & then review the new commits. Great if you're wanting to stay on top of what your team is doing. Particularly if you have a couple of juniors that need to be watched. ;)
Rsnapshot (http://www.rsnapshot.org/) is also invaluable - we have complete backup snapshots of our entire filesystem every four hours going back 2 years, and every day beyond that. It's like a data cube for your filesystem! The peace of mind this gives is pure bliss. :)
Hardly a hack, but back in the day, on our speedy VAX 11/730, our overnight process would print the file "BLAMMO.TXT" on the printer if something went amiss. Every morning, the first stop was the printer when coming in.
Back in the dot-com days about 9 years ago, I had to hack together a failover system between two different locations. We had a funky setup with a PowerBuilder front-end website and a PowerBuilder management tool. Data was stored in MSSQL 7.0. The web servers used IPX to communicate with the SQL Servers (don't ask). Anyway, I was responsible for coming up with a failover plan.
I ended up hacking together some Linux boxes and had them run our external DNS, one at each location. We had a remote site with a web server and SQL server, and I got SQL transaction replication working over a 128k ISDN IPX connection (of all things). Then I built a monitoring tool at our production site to send packets out to various upstream network handoffs. If we experienced more than 20% outage at the primary site, the monitoring tool ran a Perl script on the Debian box to change DNS and point to our secondary. Our secondary had a heartbeat with our primary DNS and monitoring station. It would duplicate records unless it lost both connections, in which case it would roll over to pointing DNS at the backup location.
The primary site would shut down the SQL server at the primary location to break replication. Automated site-to-site failover using a 128k ISDN IPX connection :)
Back at my previous job, we had to audit many tables for data changes (inserts, updates and deletes). Our support crew had to be able to search through this data to find changes that users made.
The temporary solution that had become semi-permanent was to store each non-select query. However, this was a large system, and the table would grow by about 1.5 GB a day.
The solution I came up with was to create a script that, for all tables in an external list, created the appropriate triggers to audit each table, row and column - before and after values, when, and by whom - and store it all in our new audit table. This table grew to about 10% of the size of the older version and stored much more usable data. It enabled us to create a UI to search and view every change made to our data, without requiring any knowledge of SQL for our support team or business users.
This is at a lesser level, but I am fairly proud of a makefile I wrote for compiling code for my research. It only needs to be given your source and header file names, and it can take care of the rest all by itself (though it does make the one assumption that you will not be compiling any header files into objects; only source files get compiled). The other downsides are that it relies on the GNU make program's second expansion feature, so I don't know if it works with other make programs, and that the compiler used needs to support something similar to gcc's -MM feature. Here's hoping that no one laughs at it.
# pull in the generated per-source dependency lists (ignored if missing)
-include prereqs.mk

HEADERS=$(SRC_DIR)/gs_lib.h $(SRC_DIR)/gs_structs.h
SOURCES=$(SRC_DIR)/main.cpp $(SRC_DIR)/gs_lib.cpp
OBJECTS=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SOURCES))

release: FLAGS=$(GEN_FLAGS)$(OPT_FLAGS)
release: $(OBJECTS) prereqs.mk
	$(CXX) $(FLAGS) $(LINKER_FLAGS) $(OUTPUT_FLAG) $(EXECUTABLE) $(OBJECTS)

# regenerate the dependency lists (gcc -MM style) whenever sources or headers change
prereqs.mk: $(SOURCES) $(HEADERS)
	$(CXX) $(DIR_FLAGS) $(MAKE_FLAG) $(SOURCES) | sed 's,\([abcdefghijklmnopqrstuvwxyz_]*\).o:,\1= \\\n,' > $@

# each object depends on the list named after its stem in prereqs.mk
.SECONDEXPANSION:
$(OBJECTS): $$($$(patsubst $(OBJ_DIR)/%.o,%,$$@))
	$(CXX) $(FLAGS) $(NO_LINK_FLAG) $(OUTPUT_FLAG) $@ $(patsubst $(OBJ_DIR)/%.o,$(SRC_DIR)/%.cpp,$@)
Obviously I dropped the definition of a number of variables, but I think it gets the idea across.
Since my coding tools and style are compatible with the requirements of this script I like to use it. All I need to do to add (a) new piece(s) of source code is add its name(s) to the appropriate variable and the rest is taken care of.
We have Twitter accounts for many projects which tweet things like commit messages, notices from builds, failed unit tests, deployments and bug tracking activity - any kind of event associated with the project. Running a Twitter client like Gwibber (which displays a pop-up for each new status) is a great way to stay in touch with the activity on the projects you are interested in. Using Twitter is good as you can take advantage of all the third-party apps, such as the iPhone clients.
Add a commit-hook check for VRML/3D-model files with absolute paths to textures/images. f:/maya/my-textures/newproject/xxxx.png just doesn't belong on the server.
Back in 1993, when source control systems were really expensive and unwieldy, the company I worked at had an in-house source control system built as 4DOS scripts. It wasn't as sophisticated as most current source control systems - for example, it didn't have branching or integrates - but it did the basic job of supporting revision history, checkout/checkin and rudimentary conflict resolution.