Why use sysprep for SharePoint 2010 Developer VMs? - sharepoint-2010

I have read several articles about creating a SharePoint developer VM. They all say to "sysprep" them. Why (exactly) must the sysprep be done? What kind of problems (and why) will we run into if we don't sysprep them?
(I suppose what I am asking is: what would be the difference between doing a "sysprep" and just bringing up the VM, changing its name/IP, rebooting, and then installing SharePoint?)

I've had success in the past with just copying Hyper-V VHDs as a method of cloning VMs. However, I now use sysprep when cloning any of my machines, as it's been mentioned as a best practice in many places. And it does some nice things, like letting me clean up a bunch of stuff that I don't want to duplicate and choose a new name for the machine on boot. From the MS Sysprep Technical Reference:
Sysprep prepares a computer for disk imaging or delivery to a customer by configuring the computer to create a new computer security identifier (SID) when the computer is restarted. In addition, Sysprep cleans up user- and computer-specific settings and data that must not be copied to a destination computer.
And you may want to read Russinovich's post on The Machine SID Duplication Myth (and Why Sysprep Matters) for a good explanation of how SIDs work; the very last paragraph gives another reason for going this route:
Note that Sysprep resets other machine-specific state that, if duplicated, can cause problems for certain applications like Windows Server Update Services (WSUS), so Microsoft's support policy will still require cloned systems to be made unique with Sysprep.
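For reference, and assuming a Windows 7 / Windows Server 2008 R2 era image, the usual way to generalize a VM before copying its VHD is something like:
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
Once the machine shuts down you copy the VHD, and each clone generates a new SID and is prompted for its settings (including the machine name) on first boot.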
Good luck!

Related

How to keep my vb.net program relatively secure

I wrote a small program in VB.NET and I'm looking for a simple way to keep people from just copying the executable, without the installer, and running it on another machine or reverse engineering it. I understand that if people want the program badly enough they will figure out a way to get hold of it; I'm basically just looking for some kind of deterrent to keep our competitors from walking around and copying it.
Logan,
The bad news is that you cannot stop people from reverse engineering your desktop application. You have 2 options:
Create a web application instead. The code will run securely on your server.
Use Remote Desktop Services. This way you can install your program on your server and let the users use it via RDS. Here is an article that illustrates the concept and how to implement it on Microsoft Azure: https://technet.microsoft.com/en-us/library/cc755055.aspx
The standard approach is to create a license key that will only work on a specific machine and store it in the registry. This can be something as simple as:
When your app starts, get a unique machine id (http://www.dreamincode.net/forums/topic/181408-get-unique-machine-ids/)
Perform a one-way hash on it
See if this value is stored in the registry
If it isn't, show a dialog displaying the unique machine id and asking for the 'license'
Accept input of the license so they don't need to ask again
You can manually calculate the one-way hash yourself for the computers that you want to run the software on.
This won't stop a determined hacker but it'll keep the 99.9% of people who can't hack your software honest.
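If it helps, here is a minimal VB.NET sketch of that flow. The registry path, app name, and the GetMachineId stub are placeholders; substitute whatever unique machine id you settle on from the link above.

Imports System.Security.Cryptography
Imports System.Text
Imports Microsoft.Win32

Module LicenseCheck
    ' Placeholder: substitute the unique machine id of your choice (e.g. from WMI).
    Function GetMachineId() As String
        Return Environment.MachineName
    End Function

    ' One-way hash of the machine id; this is the value you hand out as the "license".
    Function HashMachineId(id As String) As String
        Using sha As SHA256 = SHA256.Create()
            Return BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(id))).Replace("-", "")
        End Using
    End Function

    ' True when the license stored in the registry matches this machine's hash.
    Function IsLicensed() As Boolean
        Dim stored = TryCast(Registry.GetValue("HKEY_CURRENT_USER\Software\MyApp", "License", ""), String)
        Return String.Equals(stored, HashMachineId(GetMachineId()), StringComparison.OrdinalIgnoreCase)
    End Function

    ' Save whatever the user typed into the license dialog.
    Sub SaveLicense(license As String)
        Registry.SetValue("HKEY_CURRENT_USER\Software\MyApp", "License", license)
    End Sub
End Module

On startup, call IsLicensed(); if it returns False, show the dialog with GetMachineId() and store whatever the user types via SaveLicense().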

Reading from database file over network in visual basic, with no shared drives

The machine I'm trying to access is an industrial PC that serves as the interface for some PLC-automated equipment. The computer creates a data record (an .mdb database) of the various machine settings so that they can be reviewed later if desired. I've created an application in Visual Basic 2010 to display and sort that information once it's been copied from the machine to someone's laptop.
What I'd like to do, though, is allow the user to access the database from their laptop over the network; each PC has a static IP address on the customer's LAN. Currently, we can use TeamViewer to transfer files, but I'd like to include that ability in my data viewing application. Without changing any settings on the industrial PC (i.e. leaving the network file sharing alone, and avoiding installing any sort of SQL server software), how can I access this information? The database files can get pretty large (40+ MB), so I'm trying to avoid any sort of FTP transfer, which would undoubtedly take forever.
The database is already set up on an ODBC connection (which is how the PC stores the PLC information in the database), and I suspect that this may be the key, but between my lack of any thorough understanding of networking and the internet's general distrust of ODBC (well earned, from what I've seen so far), I'm having a hard time finding any useful information.
If anyone could point me towards some useful tutorials, or give me a good place to start, I'd appreciate it.
I think you are out of luck. Think about it: if you could access the file without sharing it, wouldn't that be a serious security problem?
What you can do: use the always-existing administrative share \\host\c$ to access the file. For that you'll need administrative privileges on the host.
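If you do go that route anyway, a rough VB.NET sketch might look like the following; the host, path, and table name are made up, and ReadOnly=1 asks the Access ODBC driver not to take a write lock.

Imports System.Data.Odbc

Module RemoteMdbReader
    Sub ReadSettings()
        ' Placeholder UNC path via the administrative share; requires admin rights on the host.
        Dim connStr As String =
            "Driver={Microsoft Access Driver (*.mdb)};" &
            "Dbq=\\192.168.0.10\c$\MachineData\records.mdb;ReadOnly=1;"
        Using conn As New OdbcConnection(connStr)
            conn.Open()
            Using cmd As New OdbcCommand("SELECT TOP 100 * FROM MachineSettings", conn)
                Using reader = cmd.ExecuteReader()
                    While reader.Read()
                        Console.WriteLine(reader(0).ToString())
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module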
But be aware: Even if this "works", it introduces problems:
It will be slow, because you are essentially copying 40 MB over the wire over and over again.
If you are not careful, you might lock the *.mdb file while accessing it. Access databases are not really meant to be accessed by multiple users/processes. That didn't prevent Microsoft from trying to fake it: to do so, a *.ldb (L as in Lock) file is created to signal to other processes that the file is locked.
TL;DR: Don't do it.

How do I distribute updates to an Access database front end?

I've got an Access 2007 database that I developed which connects to SQL Server for the actual data storage. I used the Package Solution Wizard to create a distributable installer which included access runtime (with an ACCDE file) which I went around and installed on 15 or so PCs. Anyway, my question is, what is the best way to distribute updates to this database? Right now I'd need to go around and remove and reinstall. That's not a problem... I was just wondering if there was another way.
I've tried leaving the front end on a network share, but it seems that most people suggest storing the front end on the local machine, which makes sense. The problems I've run into when I leave it on a network share (at least with Access 2003 MDBs) are that I find myself needing to compact and repair often, and I also have to kill the open sessions (users who have the file open) when upgrading. I would imagine it could also hypothetically create an unnecessary bottleneck if the user was not on the local network.
Automating front-end distribution is trivial. It's a problem that has been solved repeatedly. Tony Toews's http://autofeupdater.com is one such solution that is extremely easy to implement and completely transparent to the end user.
We developed a VBScript 'launcher' for our Access apps. That is what is linked to on the Start menu of users' PCs, and it does the following:
It checks a version.txt file located on a network share to see whether it contains different text from a locally stored copy.
If the text is different, it copies the Access MDB and the new version.txt to the user's hard drive.
Finally, it runs the MDB in Access.
In order to distribute an update to users' PCs, all that is required is to change the text in version.txt on the network share.
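A rough sketch of the same idea, written in VB.NET rather than VBScript (the share path and file names below are placeholders):

Imports System.IO
Imports System.Diagnostics

Module FrontEndLauncher
    Sub Main()
        Dim share As String = "\\server\apps\MyAccessApp"   ' placeholder network share
        Dim localDir As String = "C:\MyAccessApp"           ' placeholder local folder
        Dim remoteVersion As String = File.ReadAllText(Path.Combine(share, "version.txt"))
        Dim localVersionFile As String = Path.Combine(localDir, "version.txt")
        Dim localVersion As String = If(File.Exists(localVersionFile), File.ReadAllText(localVersionFile), "")

        ' If the version text differs, pull down a fresh copy of the front end.
        If remoteVersion <> localVersion Then
            Directory.CreateDirectory(localDir)
            File.Copy(Path.Combine(share, "frontend.mdb"), Path.Combine(localDir, "frontend.mdb"), True)
            File.Copy(Path.Combine(share, "version.txt"), localVersionFile, True)
        End If

        ' Finally, open the local copy in Access (uses the .mdb file association).
        Process.Start(Path.Combine(localDir, "frontend.mdb"))
    End Sub
End Module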
Perhaps you can implement something similar to this
Make a batch file on the server (network drive).
Create a shortcut link to that batch file.
Copy the shortcut to User's Desktop.
When the user double-clicks the shortcut, it copies a fresh copy from the network to the local machine.
Replace the old database.adp on the server drive when you release a new version.
Each user gets a copy of database.adp on their machine.
Remove Security warning when opening file from network share is here.
Batch File
@ECHO OFF
REM copy from network drive to local
xcopy "Your_Network_Drive\database.adp" "C:\User\database.adp" /Y /R /F
REM call your database file - Access 2007
"C:\Program Files\Microsoft Office\Office12\MSAccess.EXE" "C:\User\database.adp"
This is a very old post, but I used the autofeupdater until it stopped working, so I wrote one of my own, and it has evolved over the last few years into something that I have used with many clients. It's simple to use and there is no interface: just an EXE and a very simple config file.
Please check it out here. I can also help with custom solutions if none of the configurations work for your needs. http://www.dafran.ca/MS-Access-Front-End-Loader.aspx
After trying all of the solutions above (not exactly these solutions but these are the common suggestions in the Access community), I developed a system entirely within Access using VBA that allows an admin DB to create and publish objects to client DBs without the need for user intervention or management of multiple DB files.
This approach has several benefits:
1. It simplifies the development process by having a dedicated environment (admin DB) for development and testing totally separate from the client DBs.
2. It simplifies the update/distribution process by allowing a developer to push out updates in real time that client DBs can implement in the background, without involving users. Can also allow devs to roll back to previous versions if desired.
3. It could be used as a kind of change management system within Access for developers who want to commit multiple changes to objects and modules and retain past changes.
4. It allows for easier user access control by allowing an admin to easily assign certain objects to specific users/roles without needing to maintain multiple versions of the DB.
I will hopefully post the code to GitHub soon, I just have to get clearance from my workplace to release it. I will edit this post to include the link when I have.
We have usually kept the Access front ends on network drives, and just put up with the need to compact and repair on a regular basis. You will probably find you need to do that even when they are installed locally, anyway.
If you must have it installed locally, there are various tools which will enable you to "push out" software updates, and the guys over on ServerFault would have more information on those. Assuming such tools aren't available, the only other option I can think of is to write a small loader program that checks the local .MDB against a master copy on the server, and re-copies it across if they are different, before then launching the MDB.

Run application from documents instead of program files

I'm working on creating a self-updating application, and one issue I'm running into on Vista and Windows 7 is needing admin privileges in order to update the client. I've run into issues with clients that have their users running under restricted permissions; they would have to have IT log on to every machine that needed the client updated, since the users were not able to.
A possible work around I'm considering is to have the launcher application installed into Program Files as normal, and having the real application that it updates installed in the users documents somewhere, so that they could update and run new versions without IT becoming involved.
I'm wondering what potential gotchas I'm missing here or what I should be aware of before heading down this path. I'm aware that ClickOnce does something very similar, and I'd be using it, except I need the ability to do silent updates, without any user interaction.
This is how it is supposed to be. The last thing most IT departments want is a user randomly updating a piece of software. This could have all sorts of unintentional side effects such as incompatibility with the older version's files, new and possibly insecure functionality, etc. This is why IT departments disable Windows Update and do their updates manually in a controlled fashion.
If the users want an updated version of the software they should be requesting it from their IT department. Those computers and infrastructure don't belong to them, they're simply borrowing time on them from the company they work for so they can do their job.
Is there an issue with having only one installation of your program? Is it particularly large, for example?
Do you require admin privileges to run your program?
If not, odds are you don't need the Program Files folder.
I suggest you forgo installing to Program Files entirely and just install your program into the user's folder system at <userfolder>\AppData\ProgramName.
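For example, a one-line sketch of resolving a per-user install folder that needs no admin rights ("ProgramName" is a placeholder):

' Resolves to something like C:\Users\<name>\AppData\Local\ProgramName
Dim installDir As String = IO.Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    "ProgramName")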
If you happen to be using .NET, look into the ClickOnce deployment mechanism. It's got a great self-updating feature that'd probably make your life a lot easier.
Edit: Just saw your last sentence. ClickOnce can force the user to update.
A couple of things:
If you decide to move your app to some place in Documents, make sure that your application handles its data and paths relative to wherever the program is installed, e.g. check whether there are hard-coded paths anywhere in the code that point to the wrong places. Perhaps this is not an issue for you, but it might be something to keep in mind.
We solved this in pretty much the same way when we decided to implement a "live update" feature, but instead we installed a service that runs with administrator rights. This service in turn can run installers once the program needs to be updated. With this type of solution you don't even have to move your application out of Program Files.
Cheers!
Edit:
Another neat thing about having a service running as administrator is that you can set up named pipe communication with it and have it do things for you that you wouldn't be able to do as a normal user.
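As a rough sketch of that idea (the pipe name, command string, and installer path are all made up, and you would still need to grant non-admin users access to the pipe):

Imports System.IO
Imports System.IO.Pipes
Imports System.Diagnostics

Module UpdatePipe
    ' Runs inside the elevated Windows service: wait for one update request.
    Sub ServeOnce()
        Using server As New NamedPipeServerStream("MyAppUpdatePipe", PipeDirection.In)
            server.WaitForConnection()
            Using reader As New StreamReader(server)
                If reader.ReadLine() = "RUN-UPDATE" Then
                    ' The service has admin rights, so it can run the installer silently.
                    Process.Start("C:\ProgramData\MyApp\Updates\setup.exe", "/quiet")
                End If
            End Using
        End Using
    End Sub

    ' Called from the normal, non-elevated application to request an update.
    Sub RequestUpdate()
        Using client As New NamedPipeClientStream(".", "MyAppUpdatePipe", PipeDirection.Out)
            client.Connect(5000)
            Using writer As New StreamWriter(client)
                writer.WriteLine("RUN-UPDATE")
                writer.Flush()
            End Using
        End Using
    End Sub
End Module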
A loader stub is a good way to go. The only gotcha is when you have to update the loader; the same initial problem applies (though that should be pretty infrequent).
One problem that I can think of off the top of my head is that you're stepping outside the entire idea of keeping things more "secure." Since your executable exists in a location that should be completely accessible to a non-administrator, it's possible that something else could slam your exe thus subverting security.
You can probably leverage AppLocker. It may only be for Win7, though; I'm not running Vista any more. ;)

Best IT/back-office system hacks? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Lots of people have things that their systems do for them or for their teams. Source control post-commit hooks are a standard example: having an automated build system that checks out the latest source, compiles, tests, and packages it is a back-office hack that most of us probably use.
What other cool things have you done?
We had one developer on our team who wasn't familiar with the concept of a Subversion conflict. He deduced that if he simply deleted all that weird stuff in his code and clicked resolve, everything was OK (i.e. knocking out all the other changes in the file...).
Needless to say, after the 5th time this occurred, and the 5th time I had to explain why a defect I had just closed was recurring, I wrote a script.
It would diff the changes to a file to see whether a subsequent check-in had deleted all the previous changes, and whether it was done by the nameless developer.
It would then send an email to the boss with a description of what happened, and how much work was lost during the checkin.
There was no 7th occurrence.
We have a traffic-light that shows whether our daily build succeeds, has failed tests or simply doesn't build.
Also, we have a light bar that lights up for a few seconds whenever we receive an upload from a customer.
We aren't staffed 24x7 but we have critical processes that run throughout the night. We created an in-house alerts system to notify us of serious system issues, failed mission-critical processes, etc. It uses text-to-speech to create a descriptive message and then connects to our automated dialer to call the appropriate people with the message.
Working at a web design company, I configured our dev server so we could see a working copy of a project in real time via a subdomain name. So if your name was joe and you were working on project jetfuel, you would go to joe.jetfuel.test-example.com and could see your changes instantly without committing.
This was a simple hack that used subdomain names as a partial directory structure. Our htdocs path looked like this: htdocs/tag/project. We had a script (a PHP app that you would access at setup.test-example.com) that would create a new tag name for you, check out whatever version you wanted, and call the deploy script for that project. If it succeeded, it would forward you to the new subdomain. You could then work on this new copy via a Samba share.
This worked really well for us since we always deployed to the same Linux build and our projects had simple database requirements.
Our original reason for doing this was because our developers worked on all kinds of different platforms. Besides fixing this platform problem this was awesome for viewing changes and testing. We had all kinds of tags ranging from peoples names, trunk versions, test tags, all the way to prototypes like jquery-menu-hack.jetfuel.test-example.com
Now that I look back I wonder how much easier it would have been to run virtual machines.
We had a dev working on a classic ASP site who didn't believe in source control. The code went from his machine straight to the production box. This led to issues with lost changes or the inability to revert to a stable version. Since CruiseControl.NET has the ability to monitor a directory, I added a project that actually checked in files whenever they were copied to production. Completely backward from CC.NET's original intent, but we didn't lose any more code.
Put in a pre-commit hook that checks the bug comment refers to an open bug, assigned to the user doing the checkin. (SCMBug can do this).
Then to make life REALLY interesting, spell check the comments!!
The commit comment, and the one in the code. (spell is my buddy)
Run the code through a code formatter set to the company standard and diff it against the original: if it's not in the company's official format, reject the commit.
Do a coverage test with the unit test build.
Email all mistakes/errors caused to the development team.
I left OUT the name of the developer. They know they did it.
Not exactly hacks, but a couple of must-haves for IT dev work:
If you're using subversion, you've got to use CommitMonitor. (http://tools.tortoisesvn.net/CommitMonitor) It lets you monitor svn repositories for new commits & then review the new commits. Great if you're wanting to stay on top of what your team is doing. Particularly if you have a couple of juniors that need to be watched. ;)
Rsnapshot (http://www.rsnapshot.org/) is also invaluable - we have complete backup snapshots of our entire filesystem every four hours going back 2 years, and every day beyond that. It's like a data cube for your filesystem! The peace of mind this gives is pure bliss. :)
Hardly a hack, but back in the day, on our speedy VAX 11/730, our overnight process would print the file "BLAMMO.TXT" on the printer if something went amiss. Every morning, the first stop was the printer when coming in.
Back in the dot-com days about 9 years ago, I had to hack together a failover system between two different locations. We had a funky setup with a PowerBuilder front-end website and a PowerBuilder management tool. Data was stored in MSSQL 7.0. The webservers used IPX to communicate with the SQL Servers (don't ask). Anyway, I was responsible for coming up with a failover plan.
I ended up hacking together some Linux boxes and had them run our external DNS, one at each location. We had a remote site with a webserver and SQL server, and I got SQL transaction replication working over a 128k ISDN IPX connection (of all things). Then I built a monitoring tool at our production site to send packets out to various upstream network handoffs. If we experienced more than a 20% outage at the primary site, the monitoring tool ran a Perl script on the Debian box to change DNS and point to our secondary. Our secondary had a heartbeat with our primary DNS and monitoring station. It would duplicate records unless it lost both connections, in which case it rolled over to pointing DNS to the backup location.
The primary site would shut down the SQL server at the primary location to break replication. Automated site-to-site failover using a 128k ISDN IPX connection :)
Back at my previous job, we had to audit many tables for data changes (inserts, updates and deletes). Our support crew had to be able to search through this data to find changes that users made.
The temporary solution that had become semi-permanent was to store each non-SELECT query. However, this was a large system, and the table would grow by about 1.5 GB a day.
The solution I came up with was a script that, for every table in an external list, created the appropriate triggers to audit each table, row, and column (before and after values, when, and by whom) and store it all in our new audit table. This table grew to only about 10% of the size of the older version and stored much more usable data. It enabled us to create a UI to search and view every change made to our data, without requiring any knowledge of SQL from our support team or business users.
This is at a lesser level, but I am fairly proud of a makefile I wrote for compiling code for my research. It only needs to be given your source and header file names and can take care of the rest all by itself (though it does make the one assumption that you will not be compiling any header files into objects; only source files get compiled). The other downside is that it relies on the GNU make program's second expansion feature, so I don't know if it works with other make programs. Additionally, the compiler used needs to support something similar to gcc's -MM feature. Here's hoping that no one laughs at it.
-include prereqs.mk

HEADERS=$(SRC_DIR)/gs_lib.h $(SRC_DIR)/gs_structs.h
SOURCES=$(SRC_DIR)/main.cpp $(SRC_DIR)/gs_lib.cpp
OBJECTS=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SOURCES))

release: FLAGS=$(GEN_FLAGS)$(OPT_FLAGS)
release: $(OBJECTS) prereqs.mk
	$(CXX) $(FLAGS) $(LINKER_FLAGS) $(OUTPUT_FLAG) $(EXECUTABLE) $(OBJECTS)

# Ask the compiler (gcc -MM style) for each source file's prerequisites and
# rewrite them into per-object make variables.
prereqs.mk: $(SOURCES) $(HEADERS)
	$(CXX) $(DIR_FLAGS) $(MAKE_FLAG) $(SOURCES) | sed 's,\([abcdefghijklmnopqrstuvwxyz_]*\).o:,\1= \\\n,' > $@

# Second expansion lets each object file look up its own prerequisite variable.
.SECONDEXPANSION:
$(OBJECTS): $$($$(patsubst $(OBJ_DIR)/%.o,%,$$@))
	$(CXX) $(FLAGS) $(NO_LINK_FLAG) $(OUTPUT_FLAG) $@ $(patsubst $(OBJ_DIR)/%.o,$(SRC_DIR)/%.cpp,$@)
Obviously I dropped the definition of a number of variables, but I think it gets the idea across.
Since my coding tools and style are compatible with the requirements of this script, I like to use it. All I need to do to add a new piece of source code is add its name to the appropriate variable, and the rest is taken care of.
We have Twitter accounts for many projects which tweet things like commit messages, notices from builds, failed unit tests, deployments, bug tracking activity - any kind of event associated with the project. Running a Twitter client like Gwibber (which displays a pop-up for each new status) is a great way to stay in touch with the activity on the projects you are interested in. Using Twitter is good as you can take advantage of all the 3rd-party apps, such as the iPhone clients.
Add a commit-hook check for VRML/3D-model files with absolute paths to textures/images; f:/maya/my-textures/newproject/xxxx.png just doesn't belong on the server.
Back in 1993, when source control systems were really expensive and unwieldy, the company I worked at had an in-house source control system built as 4DOS scripts. It wasn't as sophisticated as most current source control systems (for example, it didn't have branching or integration), but it did the basic job of supporting revision history, checkout/checkin, and rudimentary conflict resolution.