Business of software: what is the best ratio of software price to required hardware?

When selling a software package that requires dedicated hardware (which could be a VM), the buyer usually also has to buy the server it will run on. So the total cost of ownership (focusing on the capital expense) includes the hardware in addition to the software.
For example, a $3000 bug tracking package might need a $1500 server to run on, for a total cost of $4500. The hardware is 50% of the software cost, or one third of the total cost.
Of course, with open source packages, the ratio is inverted.
So the question is: does it matter? At what point does hardware expense affect the sale of the software?

Why require hardware at all?
If hardware prices are a big sticking point in your sale, perhaps offer a hosted solution and factor the hardware price into that service.
"Software as a Service" might really help you make some sales to customers with limited infrastructure.

The ratio depends on which part of the equation is the commodity part.
If you are selling software that solves complex problems, like air traffic control, and it can run on any server, you might want to sell it packaged with the hardware for a bit more; but since the hardware is the commodity and can be obtained from other vendors, the price ratio will be heavily skewed towards the software.
If, on the other hand, you are an OEM and your goal is to sell your hardware, you can use the software as the commodity to bring more value to your offering. For example, you can sell high-end server machines and add a preconfigured LAMP stack to make your offering better. In this case, the price is heavily skewed towards the hardware.

"typically" - that's an assumption.
And the cost can be more than just hardware and software if there are service or renewal fees involved. This can be true of open source as well, because a lot of companies like to pay for indemnification and services.
If you buy hardware plus operating system, the decision to go with Windows or Linux comes into play.
I doubt very much that there's a meaningful, fixed ratio. It's an interesting question, though. You'd need a LOT of data, and I don't see that it's been gathered into one place.
I had another thought in a comment below that's worth surfacing. There's another consideration that's becoming more important all the time: power consumption. Some corporate data centers can't add a new server without retiring an old one or reducing power consumption some other way. Being able to deploy a new software purchase on an existing server is a big plus. If it can be virtualized, even better.
So it's not always hardware and software. Other economic considerations, like power cost, have to figure in.

I have found that hardware costs rarely affect the decision.
Small companies can get away with reusing servers.
Large companies already support clusters of servers, increasingly with VM capability so it's easy for them to deploy/redeploy software to as much hardware as necessary.

Related

Is the Allen Bradley SLC500 PLC worth buying in 2017?

I am looking for a PLC system for our brewery. I would like to buy a second-hand PLC with the necessary modules. I have seen the AB SLC500 1747-L542 CPU for a good price ($120) with a lot of modules, but I don't know if it is new enough for a project (Windows compatibility, programming environment, etc.).
Should I buy it, or would it be a bad decision? If it is not a good choice, what do you suggest? I have seen the Siemens S7-200, Siemens ET 200, and others too.
Thank you.
If you want to go cheap, use something from Automation Direct or EZ Automation. You not only need a CPU, you also need I/O cards, a rack, a power supply, software, and an HMI. That's going to be a ton of money up front. The two vendors I mentioned bundle most of that for a much lower cost of entry.
Yes, this is certainly new enough to use. However, you will need an entire rack, plus Ethernet, DeviceNet, or I/O cards to connect the processor to your components.
Also, as Bill J mentioned, AB may be the industry standard in America, but it is expensive. Depending on your brewery's income it may not be smart. Siemens is the same idea.
Quote from AB's website
Our Bulletin 1747 SLC™ 500 control platform is used for a wide variety of applications. Rockwell Automation has announced that some SLC 500 Bulletin numbers are discontinued and no longer available for sale. Customers are encouraged to migrate to our newer CompactLogix™ 5370 or 5380 control platforms.
link to website
So I would say that for a new project, no, it's not worth buying in 2017.
Depending on how many points you need, I would recommend going with the CompactLogix or MicroLogix from AB. The lowest CompactLogix is my favorite for all-around tasks; I have standardized the whole plant on it as the lowest-level PLC for the simplest machines. Built in, you get Ethernet capability, 16 inputs, and 16 outputs. You can expand the controller via different modules (up to 8 for the lowest part number), which can include additional discrete I/O, analog modules, etc.
Do not use an SLC, as they are obsolete; even though you can get it to work without much trouble, it is not a good choice for a new project.
It is hard to say exactly what you need without knowing the specifics of your project, so I would recommend using the "integrated automation builder" (a free download from AB) to properly size a controller for your needs.

Testing with 100's of devices

We are building software intended to communicate with hundreds or thousands of PCs. Unfortunately we do not have the means to set that many devices up. We don't have that many physical devices, and we don't have enough infrastructure to support that many virtual devices either. We are looking to test with high volumes of PCs, combined with other factors such as network latency. Are there services or other ways we can achieve this level of testing?
There are cloud load testing services out there that might do what you want. A few I know of off hand are LoadStorm and Load Impact. Quite a few others will turn up with a search for something like "cloud load testing". This could be an easy option. There would be some cost, but it wouldn't be too high.
If you want to roll your own solution for free, a lot of infrastructure-as-a-service providers offer a free tier for new users. Amazon EC2 and Microsoft Azure both offer 750 hours of their smallest instance per month for free. While this is usually used to run one instance continuously for the whole month (roughly 24 hours × 31 days), you could instead use it to spin up 750 servers for an hour, once a month. Spread them across all the different regions/data centers available to maximize variance in networks and latency.
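For the EC2 route, here is a minimal sketch of the "spread across regions" idea in Python, assuming boto3 is installed and AWS credentials are configured. The AMI IDs are placeholders (each region needs its own image ID), and you should check current free-tier instance types and limits for your account before running anything like this:

    # Sketch: launch a batch of small instances in several regions, run the
    # test for up to an hour, then terminate them all.
    import boto3

    REGION_AMIS = {
        "us-east-1": "ami-xxxxxxxx",      # placeholder AMI IDs; look up a
        "eu-west-1": "ami-yyyyyyyy",      # real image per region first
        "ap-southeast-1": "ami-zzzzzzzz",
    }
    INSTANCES_PER_REGION = 10

    launched = {}
    for region, ami in REGION_AMIS.items():
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.run_instances(
            ImageId=ami,
            InstanceType="t2.micro",      # free-tier eligible at the time of writing
            MinCount=INSTANCES_PER_REGION,
            MaxCount=INSTANCES_PER_REGION,
        )
        launched[region] = [i["InstanceId"] for i in resp["Instances"]]
        print(region, launched[region])

    # ... point your test clients at these hosts for an hour ...

    # Terminate everything so you stay inside the free-tier hour budget.
    for region, ids in launched.items():
        boto3.client("ec2", region_name=region).terminate_instances(InstanceIds=ids)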
You could also consider writing a testing tool using a language with good concurrency support, or with a light footprint, so that you can fire up several hundred threads/processes at once, and then run your tests on relatively few servers. It wouldn't be quite the same as 1000s of different IP addresses all at once, but 4-5 servers each fielding a few hundred clients might be enough to satisfy your testing needs.
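To make the "few servers, many clients" approach concrete, here is a rough sketch using Python's asyncio; the target host, port, and protocol exchange are hypothetical stand-ins for whatever your software actually speaks:

    # Sketch: simulate a few hundred concurrent clients from one machine.
    import asyncio
    import random
    import time

    TARGET_HOST = "test-server.example.com"   # hypothetical test target
    TARGET_PORT = 9000
    NUM_CLIENTS = 300

    async def one_client(client_id: int) -> float:
        # Small random delay so clients don't all connect in the same instant.
        await asyncio.sleep(random.uniform(0, 2))
        start = time.monotonic()
        reader, writer = await asyncio.open_connection(TARGET_HOST, TARGET_PORT)
        writer.write(f"HELLO {client_id}\n".encode())
        await writer.drain()
        await reader.readline()               # wait for the server's reply
        writer.close()
        await writer.wait_closed()
        return time.monotonic() - start

    async def main():
        results = await asyncio.gather(
            *(one_client(i) for i in range(NUM_CLIENTS)), return_exceptions=True
        )
        ok = [r for r in results if isinstance(r, float)]
        print(f"{len(ok)}/{NUM_CLIENTS} clients succeeded")
        if ok:
            print(f"average round trip: {sum(ok) / len(ok):.3f}s")

    asyncio.run(main())

Running several copies of this on 4-5 cheap servers gets you into the "hundreds of concurrent clients" range without needing hundreds of machines.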

How can I build a single-board computer, like the Raspberry Pi, that runs an OS?

My question is: how can I build a single-board computer, like the Raspberry Pi, that can run an OS?
I want to use an ARM microprocessor and a Debian ARM OS, and to be able to use USB and other peripherals, like the Raspberry Pi and other single-board computers do.
I have searched but found nothing that helps me. :(
The reason you can find nothing is probably because it is a specialist task undertaken by companies with appropriate resources in terms of expertise, equipment, tools and money.
High-end microprocessors capable of running an OS such as Linux use high-pin-density surface-mount packages such as BGA or TQFP; these (especially BGA) require specialist equipment to manufacture and cannot reliably or realistically be assembled by hand. The pin count and density necessitate the use of multi-layer boards, which again require specialist manufacture.
What you would have to do if you wanted your own board is design it, source the components, and then have it manufactured by a contract electronics assembly house. Short runs and one-offs will cost you many times the price of just buying a COTS development or application board. It is only cost-effective if you are ultimately manufacturing a product that will sell in high volumes. It is only these volumes that make the RPi so inexpensive (and, until recently, Chinese manufacture).
Even designing and having your own board built in itself requires specialist knowledge and skill. The bus speeds on such processors require very specific layout to maintain signal integrity and timing and to avoid EMC problems. The cost of suitable schematic capture and board layout software might also be prohibitive; no doubt there are some reasonably capable open-source tools, but you will have to find one that generates output your manufacturer can use to set up their machinery.
Some lower-end 8-bit microcontrollers with low pin counts are suitable for hand soldering or even DIP socketing, using a breadboard or prototyping board, but that is not what you are after.
[Further thoughts added 14 Sep 2012]
This is probably only worth doing if one or more of the following are true:
Your aim is to gain experience in board design, manufacture and bring-up as an academic or career development exercise and you have the necessary financial resources.
You envisage high production volumes where the economies of scale make it less expensive than a COTS board.
You have product requirements for specific features or form-factor not supported by COTS boards.
You have restricted product requirements, where a custom board tailored to those requirements and having no redundant features might, in sufficient volumes, be cost-effective.
Note that COTS boards come in two types: Application modules intended for integration in a larger system or product, and development boards that tend to have a wide range of peripherals, switches, indicators and connectivity options and often a prototyping area for your own use.
I know this is an old question, but I've been looking into the same thing, possibly for different reasons, and it now comes up at the top of a Google search. It provides more reasons not to ask or even look into the subject than it provides answers.
For an overview of what it takes to build a Linux-running board from scratch, this link is incredibly useful:
http://hforsten.com/making-embedded-linux-computer.html
It details:
The bare minimum you need in terms of hardware (ARM processor, NAND flash, etc.)
The complexities of getting a board designed
The process of programming the new chip on the board to include bootloaders and then pointing them to a Linux kernel for the chip to boot.
Whether the OP wishes to pursue all or just some of these challenges, it is useful to know what the challenges are.
And these won't be all of them; adding displays, graphics, and other hardware and interfaces is not covered, but it is a start.
Single-board computers (SBCs) are expected to take more load than a normal hobby board, so they have a slightly more complicated structure in terms of PCB and components. You should be ready to work with BGA packages; almost all processors used in SBCs are BGA (no DIP/QFP). Here is the best blog post I have come across recently: a very nicely designed and fabricated board running Linux on an ARM processor. The author has done a great job of designing as well as documenting the process. I hope it helps you understand both the hardware and software sides of SBCs.
A lot of the answers are discouraging, but I would say you can do it, as I have already done it with the i.MX233. It's not easy, and it's not a weekend project. My project is MyIMX233.
It took me about 4-5 months.
It didn't cost me much; a small fine-tip soldering iron is what I used.
The hard part is learning to design the PCB.
The next task is to find a PCB manufacturer with good enough precision and a reasonable prototyping price.
The next task after that is to source components.
You may not get it right the first time; I got the PCB right by my 3rd iteration. After that I was able to repeatedly produce 3 more boards, all of which worked fine.
PCB design - I used the open-source KiCad. You need to take care with impedance matching between the RAM and processor buses, and some other high-speed buses. I managed to do it on a 2-layer board with 5 mil/5 mil trace/space.
Component sourcing - I got the i.MX233 in LQFP once via Mouser and once via element14.
RAM - 64 MB, TSSOP package.
Soldering - it's easy to mess up here, but the key is patience. One caution: don't use a frying pan and solder paste to do reflow soldering; I literally fried my first 2 processors that way. Even hot-air soldering by a mobile repair shop was not good enough.
Boot image - I didn't take many chances here and just went with the Arch Linux image by Olimex.
If you want to skip the trouble of designing the circuitry between the RAM and the processor, skip the i.MX233 and go for the Allwinner V3s. In 2017/2018 this would be the easiest approach.
The bottom line is that I am a software engineer by profession, and if I can do it, then you can do it.
Why not use an FPGA board?
Something with a Zynq, like the Zybo board, or from Altera, like the DE0-Nano SoCKit.
There you already have the ARM core, memory, etc., plus the possibility of adding whatever logic you are missing.

How to benchmark software on multiple graphics cards with a cluster

I would like to benchmark my software on multiple graphics cards without having to buy any of them!
Do you know any service providers for that?
I wonder if there is something like http://www.keynotedeviceanywhere.com/ for desktops. I guess the game industry might use this kind of service to test their graphics engine...
Thanks
What about http://www.gaikai.com/? They render in the cloud.
I hate to answer your "how do I do this?" question with "don't do that", but...
Don't do that :)
I suspect most people who want to test their graphics software on multiple hardware configurations can afford to buy that hardware, so there isn't much demand for the sort of thing you are asking about, particularly not if you want it for free.
If you want to do stuff on the cheap, try:
Aim low. Get a $30 card or two and test on those. This will cover 99% of your potential user base.
Deal with the major brands. For desktops, this means ATI and nvidia. Not many other brands to worry about here, except intel if you want to stretch that far.
If you are brave enough, publish your benchmarking software on the web somewhere and ask other kind users to test it and submit results.
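If you go the volunteer route, it helps to make submitting results as painless as possible. A rough sketch in Python of what the submission side could look like; the endpoint URL and the result fields are made up for illustration:

    # Sketch: gather basic system info plus the benchmark score and POST it
    # to a collection endpoint. The URL and JSON fields are hypothetical.
    import json
    import platform
    import urllib.request

    def submit_result(gpu_name: str, avg_fps: float) -> None:
        payload = {
            "os": platform.platform(),
            "cpu": platform.processor(),
            "gpu": gpu_name,      # e.g. whatever your renderer's device query reports
            "avg_fps": avg_fps,
        }
        req = urllib.request.Request(
            "https://example.com/benchmarks/submit",   # placeholder endpoint
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print("Server responded with", resp.status)

    # A volunteer runs the benchmark, then calls:
    # submit_result("GeForce GTX 1060", 87.4)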

Working around development constraints in customer policy

As described before, I work in IT consultancy and move through various customer environments. It is natural to encounter a variety of security policies, and in most environments we have had to go through a security checklist before our laptops - our mobile development workstations - were authorized to connect to their network (most of the time just the development network).
There is one customer who does not allow external computers to connect to their network, so our laptops are reduced to expensive communication machines with mobile GSM modems. We are forced to use their desktop PCs for development, and those workstations are pretty old models with low RAM, single-core Pentium 4 CPUs, and cranky disks. Needless to say, development work is sub-optimal, especially when working with Visual Studio solutions that can range from 100 to 400 projects.
For small cases that can be isolated, we develop and test on our own laptops. But for the bigger cases, given that certain development servers like SeeBeyond and the mainframe DB2 databases are only available on their network, and the prospect of copying hundreds of projects back and forth between machines is just ghastly, that does not seem like a technically sound idea.
I am not asking for tricks that violate the customer's policies (e.g. plugging in a laptop masquerading under a desktop's MAC address). I would just like to know what others have tried in order to retain some of their advantage and efficiency with their own hardware when working in such environments. Whenever I can, I try to duplicate the environment with virtual servers on my own laptop, but that only goes so far with Microsoft-only server solutions; virtualizing non-Microsoft servers and software is a challenge.
That's tough. The root cause here is management that doesn't understand that there are real cost implications to their choice of environments.
Your problem is that while you may be billing by the hour, you probably aren't getting paid that way, so your customers' wasted time goes into the pockets of your boss and not to you. A lot of times, this presents a mild conflict of interest. Your company has about zero incentive to speed up your work, and your client doesn't want to make an infrastructure investment in what they see as a temporary engagement.
All I can say is that you have to run this up the flagpole with management. You have to show them that this is taking real time from the projects which could put your deliverable dates at risk, or worse, the reliability of these machines is such that it puts the delivery of the end product at risk as well. The onus is on you to make your management into a believer.
A gig of RAM at Crucial is thirty bucks. If nobody is willing to shell out 90 big ones for 3GB of RAM for your box, you have management that's actively working against you or does not respect you. If it comes to that, you've got bigger problems and need to look for your next employer.
One of the things I did when I upgraded my current development environment was to find links to productivity studies that showed how much productivity increased when the development environment was enhanced. In my particular case it was going from 2 to 3 monitors on my desktop. I was able to find 3-4 articles that described how much was gained by having the extra monitor. It seems self-evident to me that you'd want a newer, well-configured system for developers, especially since the cost of the hardware relative to the cost of the people is so small these days, but the bean counters often think differently. If you can go in armed with some industry studies that show productivity gains, I think it will be harder to dismiss your concerns as just complaints about the environment.
FWIW, I was disappointed to have to do the research for an upgrade that cost less than what the department would spend on paper in a month, but sometimes you have to do things that make no sense to you because it makes sense to someone else.
Write a decent proposal to your manager; that's about all you can do to rectify the situation. If he is unwilling or unable to fix the problem, or unwilling/unable to pass the proposal up to someone who can, then I'd say the current situation is what they've decided to stick with.
In that case, either live with it, or don't, i.e. move on.
The proposal should contain:
A proposal for what you want done
Why it should be done
The consequences of doing it
And most importantly, the consequences of not doing it
List things like longer development time, or less testing, or less time to write quality code. Basically, a minor upgrade that doesn't cost much will improve the quality of the product tremendously.
I just went through this and found a pretty good solution: get a different job.
Just synchronize incrementally. You're not typing so much code per second that a GSM connection cannot keep up with it. Make sure your projects are set up to use mocks/stubs wherever possible (see the sketch below).
Setting this up is probably beyond the capability of your customer's systems administrators.
The dependency on the big databases should be reduced so that you only need them for the daily regression tests.
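To illustrate the mocks/stubs suggestion, here is a minimal sketch of the pattern (in Python for brevity, although the original stack is Visual Studio/.NET; all names are invented). Day-to-day development runs against the in-memory stub, and only the daily regression run needs the real DB2/SeeBeyond servers on the customer network:

    # Sketch of the stub pattern: code depends on an abstract repository;
    # local development and unit tests use an in-memory stub, so the real
    # mainframe DB2 database is only needed for the nightly regression run.
    from abc import ABC, abstractmethod

    class OrderRepository(ABC):
        @abstractmethod
        def get_order(self, order_id: str) -> dict: ...

    class Db2OrderRepository(OrderRepository):
        def get_order(self, order_id: str) -> dict:
            # The real implementation would query DB2 here; it only works on
            # the customer network, so reserve it for regression tests.
            raise NotImplementedError("requires customer network access")

    class StubOrderRepository(OrderRepository):
        def __init__(self):
            self._orders = {"42": {"id": "42", "status": "SHIPPED"}}

        def get_order(self, order_id: str) -> dict:
            return self._orders[order_id]

    def order_status(repo: OrderRepository, order_id: str) -> str:
        return repo.get_order(order_id)["status"]

    # Local development / unit tests run entirely offline:
    assert order_status(StubOrderRepository(), "42") == "SHIPPED"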