How do you automate some routine actions for improving productivity? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Every morning, after logging into your machine, you do a variety of routine tasks.
The list can include things like opening and checking your email client and RSS reader, launching Visual Studio, running some business apps, typing some replies, getting the latest version from source control, compiling, connecting to a different domain, etc. To a large extent, we can automate these with scripting solutions like AutoIt, nightly jobs, and so on.
I would love to hear from you geeks out there about the tasks you found yourselves doing repeatedly and how you solved them by automating them. Any cool tips?

I use Linux. I have a bunch of scripts that do anything I want. Typically I write a script whenever a "block" of work can be reused in the future. For example, simple refactorings, deployments, etc...
Over time I started to combine these blocks, hence getting ever more efficient.
Regarding the "load stuff at startup", under Linux that comes out of the box (you can "save your current session" when you log out or turn off the computer).
On Windows, my suggestion is to use programs that can be automated via the command line.
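For illustration, a minimal "morning block" script on Linux might look something like this (the application names, paths, and project are placeholders, not a prescription):

    #!/bin/sh
    # Hypothetical morning routine; every name here is a placeholder.
    thunderbird &                          # mail client
    liferea &                              # RSS reader
    svn update ~/projects/business-app     # latest version from source control
    cd ~/projects/business-app && make     # start a build while coffee brews

Each line is one reusable "block"; combining blocks into bigger scripts is where the real efficiency comes from.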

A favorite approach is to leave the computer on at night or, better, if it's a laptop, put it to sleep. Running a web-browsing virtual machine in VMware or similar also works: you can set the VM to start along with the machine and save its state on shutdown, so your web pages and email client stay open. This works for development too if you're doing scripting or something similar, where the performance hit of the VM on large compiles won't negate the benefits.

SlickRun is very handy for this: just a few keystrokes to navigate to anything common, and it has a very small footprint. With input variables and file-path recognition built in, I can quickly remote-desktop to any machine, search anything, and pull up whatever's needed.

On OS X, I have an AppleScript that I run at the beginning of the day. It sets an away message on IM, hides or quits programs that would distract me, gets new mail, and so forth. I also plug in my USB backup disk, so when I'm going home, another script ejects it and quits some programs. When the script is done, so am I.
I invoke these scripts with key combos using Quicksilver.
If you don't have a Mac, by the way, Quicksilver and AppleScript are probably the #1 and #2 reasons to switch. Between the two of them, you can tell your computer to do practically anything you want in very short order.
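For the curious: AppleScript can also be driven from a plain shell script via osascript, so a sketch of a "going home" routine might look like this (the application names and volume are placeholders):

    #!/bin/sh
    # Hypothetical end-of-day script for OS X; names are placeholders.
    osascript -e 'tell application "Mail" to quit'
    osascript -e 'tell application "iChat" to quit'
    diskutil eject /Volumes/Backup    # eject the USB backup disk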

Use a good app launcher such as Quicksilver or Launchy to cut down on the time it takes to perform simple tasks. They're usually not scriptable, but they do let you do each step faster.

Writing shell scripts (AppleScript, Bash, PowerShell, etc.) is a great way to automate most mundane tasks, assuming your apps are scriptable, as well as to pick up a new language. As you venture further into this practice, you'll find yourself more and more annoyed at the apps you use that aren't scriptable, to the point where it starts to affect your choice of apps ;-)
Also, consider a cron job, a Windows scheduled task, or the OS X analog to automatically run certain tasks at certain times of the day/week/month/year. You can use this for anything from the "workday morning" scripts mentioned previously to reminding you of your wife's birthday and anniversary every year.
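For instance, a *NIX crontab could carry both kinds of job; the times, paths, and address below are placeholders:

    # edit with: crontab -e
    # min hour day month weekday  command
    50  8    *   *     1-5       $HOME/bin/morning.sh
    0   9    14  2     *         echo "Anniversary!" | mail -s "Reminder" me@example.com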
Happy automation!

I have a hard time wrapping my head around AppleScript, but since OS X runs Bash scripts just fine, I just use those instead. I've got a development server on my Mac, so I've got a script I can run to create a new site directory, create a new virtual host in Apache, add a new domain to my /etc/hosts file, etc.
It's especially cool to integrate Bash (or probably AppleScript, although I don't know how) with Growl. That way, you can put a nice message up on the screen, complete with a PNG icon. This is more useful for things that your scripts do during the day, though.
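For example, Growl ships with a command-line tool, growlnotify, that can be called straight from a Bash script; the flags below are typical, but check growlnotify --help for your version (the task and icon file are placeholders):

    #!/bin/sh
    # Hypothetical: pop a Growl notification when a long task finishes.
    make deploy && growlnotify -t "Deploy" -m "Site deployed OK" --image ok.png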

I do most of my programming work on a development server at work, so in the evening I simply detach my screen session and re-attach it in the morning; it takes just a few seconds until I'm exactly where I left off the day before.
I have some macros defined in mutt to clean up my inbox (archiving commit mails, etc.), and I have a script that mounts some directories from the development server on my notebook via sshfs (this works without interaction, using public keys). After that, all I have to do is start up a browser and get a coffee. :)
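The moving parts of that setup are roughly these (the host name and paths are placeholders):

    # evening: detach the screen session on the dev server (or Ctrl-a d)
    screen -d

    # morning: reattach exactly where you left off
    ssh devserver
    screen -r

    # mount the dev server's project directory locally; with public-key
    # auth there is no password prompt
    sshfs devserver:/home/me/project ~/mnt/project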

Related

How to test a VB program which will run on a network

I am a self-taught programmer and have only delved into new areas of programming as the need arises. I have never done any network programming; everything I have written has been for a single computer. I have written a program for an old board game and it runs great. But now I want to rewrite it to run for multiple players across a local network. My idea is to constantly check a specified folder/file for changes. But how do you test this without building/compiling the program and installing it on another computer every time you make a change?
I have tried searching various forms of what I have as the title here, but all that comes up is about testing network connections, socket programming (would this be easier, or needed?), or FileSystemWatcher (which may be an option too, if it will run on Windows 7 and 10). I find nothing about testing programs that actually access the network, or about simulating two copies of the program running. Any suggestions, links, etc. would be greatly appreciated.
I think you will be disappointed in the performance of a file-based network game unless reaction or refresh time is of little consequence for your "board" game. You may also need to work out potential concurrency issues (i.e., someone updating a file you've just read). If you have any desire to write other games in the future, you should be using sockets (most likely UDP, unless you have a good reason not to) to create a client-server system.
As to your question: yes, you should be able to test it. Just run both a compiled exe and the source in VS debug mode, accessing the same folder on your drive. If you go with the socket-based option, you would use your PC's loopback address 127.0.0.1 (also known as localhost), but each listening instance will need its own port.
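If you want to see loopback communication in action before writing any VB, netcat makes a quick sanity check (the port and message are placeholders; some netcat builds want "nc -l -p 5000" instead):

    # terminal 1: a stand-in "server" listening on localhost
    nc -l 127.0.0.1 5000

    # terminal 2: a stand-in "client" sending a move
    echo "PLAYER1 MOVE E2-E4" | nc 127.0.0.1 5000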

How to write a BIOS program that runs before the OS?

I am trying to write a script that is executed before Windows boots on my computer.
I have already developed a simple Linux bash script to accomplish this, but I would like to improve it and take it further. The problem with using a Linux script is that it adds quite a bit of time to the boot sequence, since Linux has to start and initialize before executing it, which is obviously undesirable.
What I would like to do is write a low-level program (assembly? machine code?) which the BIOS would read and execute, and which would then continue to Windows (or any other OS).
Is there a way to run scripts in that fashion without the presence of an OS, and if so, what languages or resources should I consider?
No, there is no way to write "scripts" to do what you describe.
It is, however, possible to create a chain loader and have the chain loader do anything you want it to in the first stages before loading Windows. The accepted answer to this question will get you started down this road.
You would want to consider Assembly Language. You didn't specify an architecture, but saying "Windows" implies x86 or x86_64.
All of that said, I suspect your question will be closed since it is enormously broad.

Software testing with version control

So, as a developer, you probably write a small amount of code and then test it to see if it works before you move on to something else. This is because you don't want to write thousands of lines of code and then find they don't work. Stating the obvious here. So I, and soon a few others, will be working on a PHP application where I want to implement some form of version control, most likely Subversion, since we all know how to use it, somewhat. My question is how to support this write-then-test process under version control.
My idea was to set each developer up with their own workstation, including a web server, PHP, MySQL, etc., so they can check out the repo and then test on their own computer as they write. I'm really looking for some direction here. Currently we aren't using version control; there are only two developers, and we simply use a shared directory that's located on the web server. When we make changes, we can view them immediately on the web server. Any input on this? What's the best way to handle multiple developers during the development of an application?
There are a number of different ways to approach this:
1) Each developer has a whole web server stack on their machine, deploys to it, and tests there, then checks in working code.
2) There's a separate test/integration machine. Developers take turns deploying to that machine, do their testing, then check in working code.
3) You use branches in Subversion. Development happens on a branch, and it's OK to check in broken code on a branch. There may be a branch for each developer, or a branch for each feature, or whatever. The developer checks code in on the branch, checks it out on the separate test machine, tests, fixes, then merges the working code into trunk (sketched after this list).
Which one is right depends on how big your team is and how complex your server setup is. Choose one that makes sense for your team.
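As a rough sketch of option 3 with Subversion (the repository URL and branch name are placeholders; svn 1.5+ merge tracking is assumed):

    # create a feature branch off trunk
    svn copy http://svn.example.com/repo/trunk \
             http://svn.example.com/repo/branches/login-form \
             -m "Branch for the login form"

    # work and commit on the branch; broken code is OK here
    svn checkout http://svn.example.com/repo/branches/login-form
    cd login-form
    # ... edit, test on the dev machine ...
    svn commit -m "WIP: login form"

    # once it tests clean, merge back into trunk
    svn checkout http://svn.example.com/repo/trunk && cd trunk
    svn merge http://svn.example.com/repo/branches/login-form
    svn commit -m "Merge login form into trunk"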
You'd need to start thinking about a build server, using a piece of software like CruiseControl, which monitors for source-code changes and is then able to build, run tests, and deploy your code (ideally in a manner as close to live as possible!).
I'd highly recommend integrating a build server as soon as possible; otherwise you'll find out down the line that automating something that somebody has been doing manually for the last 5 years is somewhat difficult :)
You might also find that each dev ends up with their own deployment methods and custom environment; it'd be far better to centralise this in one place and then have the other devs use those scripts if they want to run the same process locally.
Configuration management is something you want to get right to begin with!
One important thing with CI: You only want to push working code to the central repository. This requires a private repository for each developer, but has the advantage that you never break trunk.
Git and Mercurial are the most obvious tools, and can work with svn as a central repository.
To prevent merge conflicts, and to avoid pushing broken code to central, there's one trick: always pull/merge from central first, and frequently, prior to pushing:
http://martinfowler.com/bliki/FeatureBranch.html
And have a look at our sponsor: http://hginit.com for examples of workflows with multiple developers.
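With git-svn as the bridge to a central Subversion repository, the pull-first rhythm might look like this (a sketch; assumes git-svn is already set up):

    git svn rebase     # pull central changes and rebase your commits on top
    # ... run the tests; only continue if everything still works ...
    git svn dcommit    # push each local commit up to the Subversion trunk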

How can multiple developers efficiently work on one force.com application?

The company I work for is building a managed force.com application as an integration with the service we provide.
We are having issues working concurrently on the same set of files due to the shoddy tooling provided with the force.com Eclipse plugin. If two developers are working on the same file, one is told that he can't save; once he merges, he has to manually force the plugin to push his changes to the server, clicking through two "Are you really sure?" prompts along the way.
Basically, the tooling does a shoddy job of merging in changes and costs the developer minutes of work every time he wants to save a file someone else has modified.
We're currently working around this by effectively "locking" individual files: letting co-workers know who is editing what.
It feels like there has got to be a better way in this day and age. Does anyone know of a different toolset we could use, a process we could change, or anything else that would make this easier?
When working with the Force.com platform, my current organisation has found that a number of different approaches can work, depending on the situation. We all use the Eclipse Force.com plugin without issues and have found the following setups to work well.
We have a centralised version control system that we deploy from, using a series of ant commands, to a developer org instance. Then, depending on the scope of the work, we either separate it into chunks, with each developer having their own development org and merging and testing the changes regularly, or we work together in a single development org (which, with two developers, should be no major problem), allowing almost instant integration.
If you are both trying to work on the same file, you should be pair programming anyway; but when working together on two components of a similar system, sharing the same org lets you develop in a fast and flexible manner by creating the skeleton of the system you want and then individually fleshing out the details.
I have used both methods extensively and, as I say, they work really well depending on the situation.
Each developer could work in a separate development sandbox (if you have Enterprise Edition, I think 10 sandboxes with full config and a limited amount of data are included in the fee?). From time to time you would merge your changes (a diff tool from any version control system should be enough) and test them in an integration environment. The chain development -> integration -> system test -> QA -> production can be useful for other reasons too.
A separate trick can be used if, for example, two people are working on the same trigger. I learned it on the "DEV 401" course for developers.
Move all your logic to classes. Seriously. They will be simpler to unit test too.
Add a custom field (a multi-select picklist) to the User object. Its values should correspond to the separate features people are working on. It can hold up to 500 values, so you should be safe.
For the User account of developer 1, set "feature1" in the picklist. Set "feature2" for the other guy.
In the trigger, write an if that tests for the presence of each picklist value and enters or skips the call to the relevant class. This wastes one query, but you can be sure that only the code you want will be called.
Each developer keeps on working in his own class file.
For an integration test of both features, simply set the multi-select to contain both features.
I found this trick especially useful when the other guy's code turned out to be non-optimal and ate too many resources. I just disabled his feature on my user account and kept on working.
This trick can, to some extent, be applied to Visualforce pages too (if you can divide them into components).
If you don't want to waste the query, use some logic like "user's first name contains X" ;)
We had (and still have) the exact same problem: a team of 10 devs working on a force.com application that has loads of Apex classes (>300) and VF pages (>300).
We started using the Eclipse plugin but found it:
too slow when working outside of the USA: each save takes > 5 seconds
too prone to merge issues with a team of 10 developers
Next we tried developing in our own individual sandboxes and then merging the code. This is OK for a small project, but when you have lots of files and need changes pushed between sandboxes it becomes impossible to manage, as the only thing worse than force.com's development tooling is force.com's deployment/build tools. There is no automation; it's all manual. There's no easy way to move data between sandboxes either.
Our third approach was to just edit all our VF pages and Apex code in the browser (not using the embedded editor that shows up in the bottom half of the page, because that is buggy and slow, but the regular editor under Setup > Develop > Apex Classes). This worked OK. To supplement it, we also had a scheduled job that would download all our code and save it into our SVN repository, and we built a tool that let us click a folder on our desktop, zip its contents, and deploy it as static resources.
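The scheduled backup job itself can be a very small script. This is a sketch only: retrieve_org_code.sh and the paths are placeholders standing in for whatever actually downloads the code (e.g. the Force.com Migration Tool):

    #!/bin/sh
    # Hypothetical nightly snapshot of the org's code into SVN.
    cd /backups/force-code
    ./retrieve_org_code.sh              # placeholder for the download step
    svn add --force .                   # pick up any new files
    svn commit -m "Nightly code snapshot $(date +%F)"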
However, this approach still has its shortcomings: it is slow and painful to develop in the cloud, and their (Salesforce's) idea of Development as a Service is crazy. Also, we have no real SCM; it only acts as a backup.
Bottom line: force.com is a CRM and not a development platform. If you can, run, flee, get away from it as fast as you can. Using it for anything apart from a CRM is more trouble than it is worth. Even their slogan, "No Software", makes me laugh every time.
I'm not familiar with force.com, but couldn't you use source control and pull all the files down from force.com into your repository? Then you could all do your work and merge your changes back into the mainline, and whenever necessary push the mainline up to force.com.
Take a look at the "Development Lifecycle Guide: Enterprise Development on the Force.com Platform". You can find it on developer.force.com's documentation page.
You might want to consider working on separate static resources and pages, and then just being careful when editing objects, classes, etc. If most of your development happens in client-side code (pages, static resources, Lightning components/apps), you might be interested in this project: https://github.com/bvellacott/salesforce-build . In any case, I strongly suggest using version control; if not on a server, then at least locally on your machine, in case your peers overwrite your work.

Are there best practices or tricks for indexing/monitoring a drive for files?

I need to find and monitor all the photos on a hard drive or in a folder, for a photo organizer. Currently I'm doing this naively: recursively traversing, manually marking folders as indexed, and repeating the process to catch photos that are added or moved.
The problem is that with a large enough folder tree this becomes very expensive, so I'm looking for tips on doing it differently and/or on keeping it a low-CPU process.
Ideally, solutions would not be platform-dependent.
EDIT: I'm using XULRunner currently, but could compile a module to do platform-specific stuff.
What about the first run? Is there no solution (even a platform-dependent one) besides running through the entire folder tree manually?
Ideally, solutions would not be platform-dependent.
Impossible. The Win32 API has FindFirstChangeNotification, Linux has inotify (and others), Mac OS X has FSEvents, et cetera. This is stuff that's very low-level, and no OS does it the same as any other OS. If you want something cross-platform, you have to find an API with several backends that works on the platforms you want, and if any such API exists, I haven't found it yet.
I don't know of a platform-independent way to do this, but on Linux I'd hook into inotify to call something when a file gets added or updated. You could even use inotify-tools to run a script when that happens, so you don't have to be running all the time to capture these events if they're infrequent. Just have the script update the database, and optionally notify your gallery/display program if it's running.
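A minimal inotify-tools sketch (Linux only; the watched directory and log path are placeholders):

    # -m: keep monitoring, -r: recurse, -e: events of interest
    inotifywait -m -r -e create -e moved_to --format '%w%f' /photos |
    while read file; do
        # update the photo database however your organizer does that
        echo "new or moved photo: $file" >> ~/photo-index.log
    done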
Are you coding on .NET? If so, you could use the FileSystemWatcher class instead.
Why not use a file-watcher program, which will notify you of changes in particular folder trees?
If you want to write your own, you could use the FileSystemWatcher class to do it.
One answer, as of 2014, is Facebook's watchman: https://facebook.github.io/watchman/
A couple of years ago I ported some Windows API functions (FindFirstChangeNotification, FindCloseChangeNotification, ...) to Linux. It has some limitations, but for what you need it could be enough. Please take a look at: https://github.com/paulorb/FileMonitor