What is a fair hourly charge for routine site updates? - billing

What do you consider a fair, yet profitable, hourly rate for routine updates/management (i.e., information updates, maintenance, database management) for an average site?
What factors do you use to set that rate?
As a reference... I usually quote around $25/hour... am I getting ripped off?
EDIT:
Initially I was hoping for this to be a good reference for people in general as well...but since it was asked - I am in the Tucson, AZ area.

It mainly depends on where and for whom you're working. If you have to update the site often with lots of information, then charge what you normally would. If it is a one-time thing, charge a little more than you normally would.
P.S. When in doubt about how much to charge... overcharge! $$$
;-)

Take the thousands part of what you would consider an acceptable annual salary and double or triple it; this becomes your hourly rate. I'd start with tripling it (or more) and work down from there. You're better off coming in with a high quote and working your way down, because (a) raising your rate is a lot harder than lowering it; and (b) the client will feel happier when they've worked you down a bit, because they feel like they're getting a deal.
So by that rule, $25/hr works out to you being happy making roughly $12k/year.
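As a quick sketch of that rule of thumb (the $50k salary below is just an illustrative assumption, not a recommendation):

```python
# Rule of thumb from above: hourly rate = 2-3x the "thousands" part
# of your target annual salary. The $50k target is an arbitrary example.
target_salary = 50_000
thousands = target_salary / 1000  # 50
print(f"quote ${2 * thousands:.0f}-${3 * thousands:.0f}/hr")  # $100-$150/hr

# Run backwards, a $25/hr quote implies a target salary of only ~$12.5k
# (doubling) or ~$8.3k (tripling).
```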
Start off by telling them that you usually quote $100/hr (or whatever you feel comfortable with), and if they balk at that, follow it up with something like, "But since I really like working for you and want your business, I'll drop down 10% right off the bat."
Don't feel bad, like you're overcharging them -- as long as you do good work for what you get paid, both you and your customer will benefit. It's tough to walk into a conference room and ask for what feels like a shocking amount of money, but that's because most geeks (myself included!) have a habit of thinking we're worth less than we really are.
And, who knows, you might actually get the higher rate that you quote.

This is kind of an open-ended question with a lot of 'well, you could do...' answers, but generally speaking you will find a fairly typical formula (there's plenty of variation in the number of days and hours below, so choose based on your own guidelines):
Take 214 days (work days per year after holidays, vacation, sick time, etc.). Take 8 hours a day. Multiply the two (214 × 8 = 1,712); that's your total work hours per year. Take the amount of money you want to make, or feel you are worth, per year based on your skill set or market value. Divide that number by your total hours per year. That's your rate.
You can also adjust for profit/taxes, etc. or quantity of work (e.g. a maintenance contract vs. normal freelance hours).
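A minimal sketch of that formula (the target income is an assumed example):

```python
# Hourly rate = desired annual income / billable hours per year.
work_days = 214          # work days left after holidays, vacation, sick time
hours_per_day = 8
target_income = 60_000   # assumed desired annual income

total_hours = work_days * hours_per_day          # 1,712 hours/year
print(f"${target_income / total_hours:.2f}/hr")  # ~$35.05/hr
```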
Remember, time is time, regardless of what you are doing.

I think $25/hr is a very low quote, although this also depends on your experience, what you're actually doing, and where you live. I've been with companies that jump at finding a good person to outsource to for under $50/hr.
What's fair is the most money the client is willing to fork over without feeling like they've been ripped off. How do you find this number? Well, I don't know. Something I've seen that works pretty well is a company buying a block of hours in bulk; they can then tap the resource at will until the hours are drained.
Edit
Keep in mind, if you have other work you can make more money from, you need to drop them as a client or raise the rate. Don't raise an existing customer's rate too much unless you're willing to risk losing them.
If you do drop them, I'd recommend doing it as professionally as possible; you never know what the future will hold.

IMO, for maintenance it depends a lot on your skills/experience and the responsibilities the client would trust you with. For example, a wealthy client would probably prefer spending 10x the money on something done exactly as expected, without even having to double-check that it was done, whereas a small company would prefer saving money and spending more time supervising.
If you want a precise answer, you'd need to say where you are located; rates are very different from place to place!

Related

GA4 is recording a lower ecommerce purchase count (up to 20-30% less) - is it normal? What is standard here?

Is it normal for GA4 to report fewer ecommerce purchases (even 20-30% fewer) than the database records? What is standard here? Also, please explain the reason for the recording and tracking discrepancy.
Well, the most obvious reason is ad blockers; they block your tracking.
The percentage depends on how likely a particular audience is to block it. We generally expect it to be around 10%, but, as an example, it can reach 50% when you look at STEM student traffic.
Another common issue is poorly implemented tracking, which, in the case of conversions, is more likely to double-count them than undercount them, but undercounting is possible too.
Finally, GA4's interface is still not a very reliable tool for accessing your data, so if you're comfortable with SQL, take a glance at the data in BigQuery just to make sure what you're seeing in the GA4 interface is really what's in the data.
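If you do go the BigQuery route, a minimal daily purchase count against the GA4 export might look like this (a sketch assuming the standard GA4 export schema; the project/dataset names and date range are placeholders):

```python
# Count distinct purchase transactions per day in the GA4 BigQuery export,
# then compare the totals against your database's order records.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  event_date,
  COUNT(DISTINCT ecommerce.transaction_id) AS ga4_purchases
FROM `your-project.analytics_XXXXXX.events_*`  -- placeholder project/dataset
WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
  AND event_name = 'purchase'
GROUP BY event_date
ORDER BY event_date
"""
for row in client.query(query).result():
    print(row.event_date, row.ga4_purchases)
```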

I am looking for a radio advertising scheduling algorithm / example / experience

Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally, if someone has written something like this for other advertising media (web, etc.), it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem, because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet!
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is simulated annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will), and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly, or randomly within their time windows (it doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range, you can reject the change immediately. If both are still in range, does it move you to a better overall score? Initially you accept changes randomly even if they make the score worse, but over time you reduce the probability of that happening, so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
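Here's a minimal, self-contained sketch of the approach in Python (the ads, categories, gap size, and cooling parameters are all made-up examples; the score only encodes the same-category spacing rule from the question):

```python
import math
import random

MIN_GAP = 3  # the "x" from the question: same-category ads must be >= 3 slots apart

def score(playlist):
    """Penalty score; lower is better. Each same-category pair closer than
    MIN_GAP slots adds a penalty proportional to how close the pair is."""
    penalty = 0
    for i in range(len(playlist)):
        for j in range(i + 1, min(i + MIN_GAP, len(playlist))):
            if playlist[j][1] == playlist[i][1]:
                penalty += MIN_GAP - (j - i)
    return penalty

def anneal(playlist, steps=20_000, t_start=5.0, t_end=0.01):
    current, cur_score = list(playlist), score(playlist)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i, j = random.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]    # propose a swap
        delta = score(current) - cur_score
        # Always keep improvements; keep worse layouts with a probability
        # that shrinks as the temperature drops.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur_score += delta
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
    return current, cur_score

# Toy input: (ad, category) tuples, one entry per scheduled play.
ads = [("Joe's Diner", "Restaurant"), ("VitaShop", "Health"),
       ("Corner Grocer", "Food store"), ("AutoFix Garage", "Auto")] * 5
random.shuffle(ads)
playlist, final_score = anneal(ads)
print(final_score, playlist)  # a zero score means every spacing rule is met
```

The score function is the only piece you'd need to extend for series spacing, promotion weighting, or time-of-day rules.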
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program but will probably converge more quickly on a good answer.

Year dropdown range - when do we stop?

I attended a payroll software demo yesterday wherein the year dropdowns throughout the software ran from 2000 to 2200. Now, we've all been down this road before with two-digit shortsightedness, but honestly - a 200-year service life for a Java & Oracle payroll system? Our Board of Directors would be thrilled if the company were even solvent for a quarter that long.
When forced to use a dropdown year select, where do you draw the line?
It depends upon the usage. If you're trying to ascertain retirement dates for financial planning, you need to allow users to select years decades into the future. If you're asking for credit card expiration dates, current year + 10 should be more than sufficient. Either way, you would be populating these dropdowns dynamically, lest you desire touching up the user interface every year.
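For instance, a dynamically populated range might be built like this (the +10 horizon is just the credit-card example above):

```python
# Build the year list at render time so nobody has to touch the UI next January.
from datetime import date

current_year = date.today().year
years = list(range(current_year, current_year + 11))  # current year + 10
```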
Why not make your app end-user-configurable? Give them a config screen, let them enter a cut-off year as 4 digits and refer to that in the code?
I like to make as much as possible end-user-configurable - it means I can ship one s/w to multiple customers, and it pushes off some tricky decisions to them :-)
The drawback of such a large range is that the dropdown becomes unwieldy - there will certainly be a scrollbar, and it becomes harder to find the year you're looking for.
If it has to handle retirement dates, I'd say 55 years into the future would be sufficient (an 18-year-old will probably be retired by 73). My limited experience with such systems precludes me from knowing what a reasonable limit would be otherwise - perhaps you can enlighten us?
Who's forcing you to use a dropdown year select? They're annoying as all hell.
Do a research project showing that typing in a 4-digit date takes less time than using a pulldown big enough to have a scrollbar, multiply the time difference by a vastly inflated estimate of how many people will be using the software, multiply that by a vastly inflated estimate of the pay rate for data entry, and show the company how you can save $18.7 billion over the life of the software.
The oldest confirmed human age is 122. So my bet would be to set it to 125.

How do I work out the cost benefit of optimisation?

I want to figure out how much money I'd save if I optimise some part of my web app. If I save 100 cpu milliseconds over 50K calls to the app, how much electricity is that not using in a day? How about over a year?
I've tried to find some figures thru google, but my googling mojo is failing me at present.
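For concreteness, the kind of back-of-envelope I'm after looks like this (the 100 W draw and $0.15/kWh price are placeholder guesses, not figures I've found):

```python
# 100 ms saved per call x 50,000 calls/day, under assumed power/price figures.
saved_ms_per_call = 100
calls_per_day = 50_000
cpu_watts = 100        # assumed incremental CPU power draw under load
price_per_kwh = 0.15   # assumed electricity price, USD

cpu_seconds_per_day = saved_ms_per_call * calls_per_day / 1000  # 5,000 s/day
kwh_per_day = cpu_seconds_per_day / 3600 * (cpu_watts / 1000)   # ~0.14 kWh/day
print(f"{kwh_per_day:.3f} kWh/day, ~${kwh_per_day * price_per_kwh * 365:.2f}/year")
```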
You can't calculate something that specific. You can only conduct an experiment and see what happens.
But honestly I would rather spend time refactoring code for better maintainability and adding new features the customers will like and pay for, so that I won't have to think about electricity.
When "optimizing" it is always important to focus on what you want to "optimize" - in this case, your electricity bill. I would not even bother looking at changing code in an attempt to affect your electricity bill. I would look at the computer's power supply, cooling fans, heat sink, etc. and optimize those things for energy efficiency (buy new, more efficient components). More than likely it will cost less than several hours of a software engineer "optimizing" code for energy efficiency.

How do I calculate the "cost" of a crash? [closed]

Closed. This question is off-topic and is not currently accepting answers. Closed 9 years ago.
Background:
Some time ago, I built a system for recording and categorizing application crashes for one of our internal programs. At the time, I used a combination of frequency and aggregated lost time (the time between the program launch and the crash) for prioritizing types of crashes. It worked reasonably well.
Now, The Powers That Be want solid numbers on the cost of each type of crash being worked on. Or at least, numbers that look solid. I suppose I could use the aggregate lost time, multiplied by some plausible figure, but it seems dodgy.
Question:
Are there any established methods of calculating the real-world cost of application crashes? Or failing that, published studies speculating on such costs?
Consensus
Accuracy is impossible, but an estimate based on uptime should suffice if it is applied consistently and its limitations are clearly documented. Thanks, Matt and Orion, for taking the time to answer this.
The Powers That Be want solid numbers on the cost of each type of crash being worked on
I want to fly in my hot air balloon to Mars, but that doesn't mean such a thing is possible.
Seriously, I think you have a duty to tell them that there is no way to accurately measure this. Tell them you can rank the crashes, or whatever it is that you can actually do with your data, but that's all you've got.
Something like "We can't actually work out how much it costs. We DO have this data about how long things are running for, and so on, but the only way to attach costs is to pretend that X minutes equals X dollars even though this has no basis in reality"
If you just make some bullcrap costing algorithm and DON'T push back at all, you only have yourself to blame when management turns around and uses this arbitrary made up number to do something stupid like fire staff, or decide not to fix any crashes and instead focus on leveraging their synergy with sharepoint portal internet web sharing love server 2013
Update: To clarify, I'm not saying you should only rely on stats with 100% accuracy, and just give up on everything else.
What I think is important is that you know what it is you're measuring. You're not actually measuring cost; you're measuring uptime. As such, you should be upfront about it. If you want to estimate the cost, that's fine, but I believe you need to make this clear.
If I were to produce such a report, I'd call it the 'crash uptime report' and maybe have a secondary field called "Estimated cost based on $5/minute." The managers get their cost estimate, but it's clear that the actual report is based on the uptime, and cost is only an estimate, and how the estimate works.
I've not seen any studies, but a reasonable heuristic would be something like:
( Time since last application save when crash occurred + Time to restart application ) * Average hourly rate of application operator.
The estimation gets more complex if the crashes have some impact on external customers, or might delay other things (i.e., create a bottleneck such that one person winds up sitting around waiting because someone else's application crashed).
That said, your 'powers that be' may well be happy with a very rough estimate so long as it's applied consistently and they can see how it is changing over time.
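A minimal sketch of that heuristic (the times and rate below are illustrative):

```python
# Cost ~= (unsaved work time + restart time) x operator hourly rate.
def crash_cost(minutes_since_last_save, restart_minutes, hourly_rate):
    lost_hours = (minutes_since_last_save + restart_minutes) / 60
    return lost_hours * hourly_rate

# e.g. 12 minutes of unsaved work + 3 minutes to restart at $40/hr -> $10.00
print(f"${crash_cost(12, 3, 40):.2f}")
```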
There is a missing factor here: most applications have a 'buckling' point where crashes suddenly start "costing" a lot more because people lose confidence in the service your app is providing. Once that happens, it can be very costly to get users back to trusting and using the system.
It depends...
In terms of cost, the only thing that matters is the business impact of the crash, so it rather depends on the type of application.
For many applications, it may not be possible to determine business impact. For others, there may be meaningful measures.
Demand-based measures may be meaningful - if sales are steady, then downtime for a sales app may be a useful measure. If sales fluctuate unpredictably, then such measures are less useful.
Cost of repair may also be useful.