MS Project leveling doesn't work as expected for "As Late As Possible" (ALAP) tasks

In MS Project, the leveling functionality doesn't seem to work as I would expect for tasks with the "As Late As Possible" constraint. By default it seems to over-allocate a resource in order to push tasks to finish later than they actually need to. Instead, I want leveling to account for other tasks when deciding start and end dates. It's OK if some tasks finish well before their deadlines, as long as each is "as late as possible" given the other tasks.
Please see this screenshot for an example:
[Screenshot: leveling error and Gantt chart]
Notice that #202 is OK because no other tasks have deadlines before #202 starts. But #201, #200, and #205 conflict. Instead, I would want #200 to finish on 3/12, and #205 to finish before #200 starts (no overlap).
Is there some way I can adjust the settings to get this to work as I want?

Related

How do I disable "activities" for a custom module on SugarCRM 6.5+/7+

The title is pretty clear, but again: how do I disable "activities" for a custom module on SugarCRM 6.5+/7+?
I have a module containing millions of records, and the activity stream has been slowing it down to the breaking point. I managed to stop the activities through some hacking (deleting entries from the cache folder), but I would like to know how to do it the right way, so that things remain normal after a Repair & Rebuild and the like.
Edit 1:
I'm happy to completely disable activities for a limited period of time while my script runs and then enable them again right after, if that is possible.
Well, I figured out how to disable activities (the activity stream, known in the past as Sugar Feed, I think).
Since my problem was running a script over 100k records, temporarily disabling the whole activity stream at the beginning of the script and turning it back on at the end was sufficient.
It's quite simple, and it's a bit embarrassing that I didn't look into the activity stream's source earlier, because disabling it takes a single call:
Activity::disable();
does the job and to turn it back on:
Activity::enable();
There is also a "blacklist" array in the source, but (1) it didn't solve the problem and (2) it's clearly not upgrade-safe.
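For anyone doing the same, here is a minimal sketch of the pattern; only the Activity::disable()/enable() calls come from the actual API, while the module name, field, and ID list below are placeholders:

    <?php
    // Activity lives in modules/ActivityStream/Activities/Activity.php in Sugar 7
    Activity::disable();                        // stop the activity stream

    foreach ($recordIds as $id) {               // $recordIds: your own list of IDs
        $bean = BeanFactory::getBean('MyCustomModule', $id);
        $bean->some_field = 'new value';
        $bean->save();                          // saves no longer create activity entries
    }

    Activity::enable();                         // turn the stream back on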

Noob Guidance on a Parallel Task Workflow (without Visual Studio)

This is going to be my first workflow, and I could use a little guidance.
I have a list I'm using for requests when a user needs their profile changed (e.g., a change of office location). The change has to be made in AD, PeopleSoft, and another database. Right now, requesters submit an item to a list, and alerts go out to the different people responsible for making the updates in AD, PeopleSoft, etc. However, there has been enough frustration with missed emails and the like that I've been asked to track the requests via a workflow.
So essentially, I need to track a request that goes out to multiple users, who will then need to confirm that their task has been completed. I found this diagram (http://officeimg.vo.msecnd.net/en-us/files/989/238/ZA102615287.jpg), which is a very good representation of what I want to do, but the accompanying article does a very confusing job of explaining how to do it: http://office.microsoft.com/en-us/sharepoint-help/all-about-approval-workflows-HA102771433.aspx
Can someone point me to the workflow type that I need and the steps to implement it? OOB/SharePoint Designer please; I don't have VS on my machine.
Thanks,
Scott
I will start by saying that implementing parallel tasks in a single workflow is hard.
What you can do is customise the OOB approval workflow (the one mentioned in the article) to suit your needs. This will also give you insight into how SharePoint workflows work and are designed.
It will look confusing at first (very confusing), since, as I said, it is a complex workflow to set up, until you start to understand how it works.
Make sure you take a copy of the approval workflow before modifying it, so you can still use the original if needed.

VBA: maintaining the program in memory

Sorry I don't know if this is something simple, or even where the problem fits in the greater scheme of programming.
So in my unsophisticated ways, my programs have always been of the scheme: 1. start program, 2. wait while program runs, 3. program is done and gone.
What I am doing now is creating a table from a long list of transactions (tens of thousands). The table has several combo boxes for the user to select filters. Right now, every time the user changes a filter, the entire log is re-processed, which takes 30-60 seconds.
What I would rather do is have the trade log held in memory, or somehow more immediately available, but without the program "spinning" in the background. The user could then go about using Excel, unaware that the program is ready in the background in case they want to update the table later.
Does that make sense? If it can't be done in VBA, I'd still be curious how it would be done in another environment, say C#, if it could be. Thanks.
If the frequency of updates to the option trades is low enough, you could separate reading and processing the trades from the filtering process:
Step 1 - Refresh - read the logs and process them, storing the results in global containers (arrays, collections, dictionaries, objects ...)
Step 2 - User requests - show form - user chooses filters - show/store results extracted from the global containers.
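A minimal sketch of those two steps in VBA (the sheet name and column positions are illustrative); because TradeLog is a module-level variable, it stays in memory between calls without any code "spinning" in the background:

    Option Explicit

    Private TradeLog As Variant        ' module-level cache, survives between calls
    Private LogLoaded As Boolean

    Public Sub RefreshLog()            ' Step 1: read and process the log once
        TradeLog = Worksheets("Log").Range("A1").CurrentRegion.Value
        LogLoaded = True
    End Sub

    Public Sub ShowFiltered(ByVal office As String)   ' Step 2: filter the cache
        Dim r As Long
        If Not LogLoaded Then RefreshLog
        For r = 2 To UBound(TradeLog, 1)              ' row 1 assumed to be headers
            If TradeLog(r, 1) = office Then
                Debug.Print TradeLog(r, 1), TradeLog(r, 2)
            End If
        Next r
    End Sub

The cache is only cleared when the VBA project is reset (End, or an unhandled error), so the user can keep working in Excel and re-filter almost instantly.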
There are several options:
Firstly, is the code correctly structured? For example, do you really need to re-process everything, or could a rewrite be more efficient?
If you cannot avoid resource-intensive code, notify the user with a progress bar or message. Also consider using DoEvents, which yields to the operating system so that Excel can process other events.
DoEvents is slow and dirty, though; better, look at this link: DoEvents is slow!!! Here are faster methods.
Rewrite your code to work asynchronously: create a class and a handler, and deal with each transaction asynchronously.
You could write some VBScript/JavaScript and push the task out to run independently of Excel/VBA; e.g., there's an example here.
Don't use VBA :)
Edit: how are you filtering? If you're iterating through thousands of items in an array testing for criteria, it can be very slow. Excel's Advanced Filter, by contrast, can process hundreds of thousands of rows with multiple criteria very quickly.
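A minimal sketch of calling Advanced Filter from VBA (the sheet and range addresses are illustrative; the criteria range is a header row plus one row of criteria):

    Sub FilterTrades()
        Dim ws As Worksheet
        Set ws = Worksheets("Log")
        ' Copy matching rows from the data block to column K on the same sheet
        ws.Range("A1").CurrentRegion.AdvancedFilter _
            Action:=xlFilterCopy, _
            CriteriaRange:=ws.Range("H1:I2"), _
            CopyToRange:=ws.Range("K1")
    End Sub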
When a macro runs in Excel VBA, the user cannot use Excel any more; running VBA blocks the whole application.
Here are a few tips for working around your problem:
Keep the VBA running: load the data once when the combo box is first launched, then display results every time the user asks, keeping the form loaded so that VBA keeps its context and memory.
Load the data into an Excel worksheet, even a hidden one, and then use it when the user asks for some data.
Give us more info on what you are doing: where you are loading the data from, how you can cache it, what your current code is, and what you have tried, so that we can help you more.
Regards,
Max

How to verify lots of events in a reasonable way

I am new to software testing. Currently I need to test a middle-sized web application. We have just refactored our codebase and added a lot of event-logging logic to the existing code. The event-logging code writes to both the Windows event log and a SQL database table.
There are about 200 events. What approach should I take to test/verify this refactoring effectively and efficiently?
Thanks.
I would be tempted to implement unit tests for each of the events, to make sure that when an event occurs the correct information is passed into your event-logging logic.
This would mean that you could then trigger one event on the deployed site, verify the data is written to the database and the event log, and have an acceptable level of confidence that the remaining events will be recorded correctly.
If unit testing isn't an option, then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of one of them failing independently; that way you would have 200 tests rather than 400.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps
I would have added tests before you did the refactoring; you don't know what you have broken already :).
You say that it logs to the Event Viewer and the DB. I hope you have exposed the logging feature as an interface, so that you can:
Extend it to log to some other device if needed
Make mocking a lot easier (see the sketch below)
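As a hedged sketch of that interface idea (Java purely for illustration; the real logger, classes, and event names are unknown, so every name below is hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    interface EventLogger {
        // the production implementation writes to the event log and the SQL table
        void log(String eventCode, String message);
    }

    class FakeEventLogger implements EventLogger {
        final List<String> recorded = new ArrayList<>();
        public void log(String eventCode, String message) {
            recorded.add(eventCode);          // capture instead of writing anywhere
        }
    }

    class OrderService {                      // hypothetical class under test
        private final EventLogger logger;
        OrderService(EventLogger logger) { this.logger = logger; }
        void cancelOrder(int id) {
            // ... business logic ...
            logger.log("ORDER_CANCELLED", "order " + id);
        }
    }

    public class EventLoggingTest {
        public static void main(String[] args) {
            FakeEventLogger fake = new FakeEventLogger();
            new OrderService(fake).cancelOrder(42);
            if (!fake.recorded.contains("ORDER_CANCELLED"))
                throw new AssertionError("expected event was not logged");
            System.out.println("OK: " + fake.recorded);
        }
    }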
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape creating an equal number of tests for your 200 events.
I would do it this way:
search for all the places where my logging interface is used, note all the classes, and start with the critical paths first (that way you at least cover the critical ones);
or start from the end, i.e. note down all the possible combinations of logs you are getting, perhaps pointing at stale data, so that if the input is the same the output should be the same too. Then, every time, regression-test your new binaries against this data; you should get the same number and level of logs.
This shouldn't be too difficult.
Pick a free automated web-test tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application you expect/need to fire events. Examine the SQL DB after each test to see which events did fire.
If those event streams are correct for the test, add a step to the test to verify that exactly that event stream was created in the DB.
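For that DB check, a query along these lines could work; the table and column names are assumptions, since the question doesn't give the schema:

    -- hypothetical schema: EventLog(EventCode, Message, LoggedAt)
    SELECT EventCode, COUNT(*) AS Occurrences
    FROM EventLog
    WHERE LoggedAt >= '2014-01-01 09:00'   -- start time of the test run
    GROUP BY EventCode
    ORDER BY EventCode;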
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
The efficient & effective part of this approach is that it allows you to create only as many tests as you need to verify the app. Also, you do not need to recreate a unit-test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent & repeatable results. Also, re-testing will take nearly as long each time, as the testing uncovers defects that need to be fixed.
Note: while this will be the most effective & efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris

App launch sequencer

Every morning when I get into work I launch about a dozen apps and whatnot (FF, TB, VS x2-3, Eclipse, SSH, SVN update x2-3). Needless to say, this does a good job of warming up my HDD for the day. I rather suspect that it would all run a lot faster if the apps were launched sequentially (not to mention that I wouldn't need to click in 17 different places).
Is there a pre-existing product that can kick off a sequence of tasks/apps/etc. where each task is only started after the last app is done hammering the HDD?
It would need to be able to kick off apps like VS and Firefox and also be able to trigger Explorer context-menu items like SVN Update in TortoiseSVN.
Try SlickRun: it's free, I've used it for years, I use it constantly, and I'd be lost without it.
Think of it like a configurable Start->Run command, it'll do what you want (you can configure n second pauses between multiple commands), and if you install it you'll use it for a thousand different things before the first week is out.
P.S. I have no stake in SlickRun, I just like it :)
Unfortunately, I don't know of any software that can do this for you automatically.
However, can't you trigger the updates through a console SVN command? If so, this could be done with a batch file. It's low-tech, and you might want to add a few pauses between tasks, but it should do what you want.
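Something along these lines, as a sketch (all paths are illustrative, and the timeout command requires Vista or later):

    @echo off
    rem update the working copies first
    svn update "C:\work\project1"
    svn update "C:\work\project2"
    start "" "C:\Program Files\Mozilla Firefox\firefox.exe"
    rem crude pause so the next launch doesn't fight the previous one for the HDD
    timeout /t 30 /nobreak
    start "" "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe"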
As you mention TortoiseSVN, I'll assume your OS is Windows.
You could launch an AutoHotkey script at startup. I don't think it can easily detect HDD activity, but you can at least wait until each window appears, using the WinWaitActive command.
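For example (a minimal AutoHotkey v1.1 sketch; the programs are placeholders):

    ; launch each app only after the previous one has shown its window
    Run, firefox.exe
    WinWaitActive, ahk_exe firefox.exe
    Run, devenv.exe
    WinWaitActive, ahk_exe devenv.exe
    Run, thunderbird.exe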
If each application takes a roughly predictable time to complete, you could simply use Windows' Scheduled Tasks application. Obviously you'll need to be running Windows; Scheduled Tasks can be found in the Control Panel.
Execute "Add Scheduled Task", select the program, the frequency, and then the specific time.