Help building an E-commerce site

Guys, I want to build an e-commerce site on ASP.NET. My query is: when two users simultaneously buy something, how do the two records get inserted into my database? Would there be a lock? Can anyone explain how this is, or can be, handled?
Also, I want to handle peak traffic and control the average amount of data allotted to each user. I am thinking of using a plug-in. Any suggestions here?

Just a thought, but if you're just starting a new online store, it's highly unlikely you'll run into problems like this for some time.
Secondly, databases take care of this themselves, and your web server does the work of handling traffic and allocating resources. You shouldn't be doing this at the application-code level.

The RDBMS handles the locking for you.
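To see why, here is a minimal sketch (Python's DB-API with SQLite standing in for your real database; the orders and products tables are assumptions) of each purchase running in its own transaction, with the database serialising conflicting writes:

    import sqlite3

    def record_purchase(db_path, user_id, product_id):
        conn = sqlite3.connect(db_path)
        try:
            with conn:  # opens a transaction; commits on success, rolls back on error
                # Atomic stock check-and-decrement: if two buyers race for
                # the last item, the database lets only one UPDATE succeed.
                cur = conn.execute(
                    "UPDATE products SET stock = stock - 1 "
                    "WHERE id = ? AND stock > 0",
                    (product_id,),
                )
                if cur.rowcount == 0:
                    raise RuntimeError("out of stock")
                conn.execute(
                    "INSERT INTO orders (user_id, product_id) VALUES (?, ?)",
                    (user_id, product_id),
                )
        finally:
            conn.close()

Two simultaneous purchases simply become two committed rows; you never take locks by hand.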

Maybe a better option is a free e-commerce package, as it will already include a lot of optimisation for heavy traffic. Is there a specific reason not to use one of the many solutions already available?

Related

How to Construct a Database to be Used on iOS and on the Web

I am making an iPhone application and a website simultaneously, and I want a shared database between the two. I know about some of the options, although none of them seems to work perfectly. Core Data would be very nice on the iPhone side, but I haven't found a way to access the same information from anything other than Objective-C. SQLite is another option; I might be wrong, but it supposedly does not work well on servers where it is accessed simultaneously from different places. Or I could use XML, which seems easy on both sides, but that seems like the slowest option and will be a huge drag on performance if I'm going to be reading it from a server all the time.
Any help would be appreciated, and if you know of any other solutions I will be glad to give them a try.
Thanks in advance,
Jordan
Have you thought of using a MySQL backend on your server, and then providing your mobile application with an API?
This takes the hassle out of data synchronisation and also provides you with a good level of sustainability and scalability.
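As a rough sketch of the idea (the /items endpoint and the hard-coded data are placeholders; a real backend would query MySQL through a driver such as MySQLdb), the API can start out very small:

    import json
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        if environ["PATH_INFO"] == "/items":
            # In production this data would come from MySQL; it is
            # hard-coded here to keep the sketch self-contained.
            items = [{"id": 1, "name": "widget"}]
            body = json.dumps(items).encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()

Both the website and the iPhone app then talk HTTP/JSON to this single backend, so there is nothing to synchronise between them.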
I hope this helps.
Josh.

Need to track unwanted file downloads

I'm looking to track how many people on the web have made my software program file available for download without my permission.
I've thought of searching for my product name and file size to catch possible thieves.
Do you think a web search API is the best way forward?
EDIT: I plan to use the detection data for survey purposes.
There are web analytics companies which could probably help you more than a roll-your-own solution. Consider the companies that big music and film vendors use, or check with the Business Software Alliance.
Ultimately, you are chasing your tail if you are looking to thwart piracy with the results of this kind of activity. However, if you are steadfast that you must try to understand what is going on, you need professional (web analytic) help. There are so many variations out there that you need someone experienced in tracking this kind of information, since you could easily get a false sense of security, or an inflated sense of activity.
No, this ultimately doesn't help you. What are you going to do? Send them emails telling them to buy your software? That sounds like something a spam filter will catch.
I would suggest investing in prevention rather than detection. A registration or activation process is pretty popular and reasonably successful. If you have an amazing app, it won't stop the really determined people from cracking it, but it will make it much more difficult.
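For what it's worth, here is a minimal sketch of one common activation scheme, HMAC-signed keys (the secret and key format are invented for illustration); note that anything verified on the client can eventually be cracked, which is exactly the caveat above:

    import hashlib
    import hmac

    SECRET = b"keep-this-on-your-server"  # illustrative; never ship it readable

    def make_key(email):
        sig = hmac.new(SECRET, email.encode("utf-8"), hashlib.sha256).hexdigest()[:16]
        return "%s-%s" % (email, sig)

    def check_key(key):
        email, _, sig = key.rpartition("-")
        expected = hmac.new(SECRET, email.encode("utf-8"), hashlib.sha256).hexdigest()[:16]
        return hmac.compare_digest(sig, expected)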

PyAMF backend choices!

I've been using PyAMF to write a backend for a Flex app that will request different groups of hundreds of different images depending on what the client needs. I have been using the "simple_server" WSGI server that PyAMF supplies while developing the Flex code. Now I'm ready to write a robust backend that can pull images from a MySQL database and send them as quickly and efficiently as possible to many concurrent clients.
The PyAMF documentation is great because they supply many examples to follow, however I am confused about what kind of backend I am trying to create.
Do I want a SocketServer or a WSGI server or something like Twisted or web2py or Tornado? Are these even all different? :) Should I be using Apache modules instead (mod_wsgi or modjy or mod_python)?
I realize that this probably touches on many open debates, so maybe you could just point me to any good summaries of these debates?
It's great to have so many options, but how do I choose?
The short answer is, of course, that it depends on the requirements of your project.
How many concurrent connections is "a lot"?
How much programmer time can you throw at the problem?
How much hardware can you throw at the problem?
...etc...
If you plan to have lots of concurrent clients, it's hard to beat Twisted in the Python world. However, you'll have to deal with your database asynchronously to avoid blocking, and depending on how complex your database interactions are, this can be a bit of a pain. You're basically limited to either using twisted.enterprise.adbapi or coming up with your own twisted-ORM integration.
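A minimal sketch of what that looks like with twisted.enterprise.adbapi (the MySQL credentials and the images table are placeholders):

    from twisted.enterprise import adbapi
    from twisted.internet import reactor

    # A thread pool runs the blocking DB-API calls off the reactor thread.
    dbpool = adbapi.ConnectionPool("MySQLdb", db="app", user="me", passwd="secret")

    def on_rows(rows):
        print("fetched %d image rows" % len(rows))

    def on_error(failure):
        print("query failed: %s" % failure)

    d = dbpool.runQuery("SELECT id, data FROM images WHERE gallery = %s", ("cats",))
    d.addCallbacks(on_rows, on_error)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()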
If you'd rather have "easy" database code (i.e. you want to use an ORM), you're better off going with a (TurboGears/Pylons/plain wsgi) project, probably hosted using Apache and mod_wsgi. This can be a pretty scalable solution, and you get a lot of stuff for free using these frameworks, but it may be more than you need.
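For the mod_wsgi route, the whole contract is a module-level callable named application; whatever framework you pick, Apache ultimately hosts something shaped like this bare-bones sketch:

    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello from mod_wsgi"]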
I would avoid using one of the many plain python wsgi servers out there (wsgiref, paster, etc.) in production if you really want high performance.
Good Luck!

Track Improvement Requests

We receive 5-10 improvement requests each day from our customers. Some of them are good, and some not so good. I can easily pick out the ones that I agree with, but I'd like a good way to organize the rest, so if we get a lot of similar requests we can prioritize those appropriately.
We have a large backlog of good ideas, so a request usually won't get added to the work queue unless we see strong demand from customers. That makes it impractical to track them in our current work-item tracking system (TFS). The main goal of organizing them is to see where demand is strongest, so we can determine which features are most important to our users.
Any suggestions are welcome.
I've seen small applications that allow people to add requests and anyone can vote on them. This would show you what people using that particular system are most interested in - although depending on implementation it can be very susceptible to gaming. Look into something like UserVoice. They probably do most of the work for you.
Use a StackExchange site to let your users create and vote on their priorities. Another alternative would be something similar, like UserVoice, though here on SO we've found that the SO platform seems to work better.
You might also want to compile a set of potential feature requests and use something like SurveyMonkey, Vovici/WebSurveyor, or even Google Docs Forms to collect information from your users on which items they would like to see.

Website Hardware Scaling

So I was listening to the latest Stackoverflow podcast (episode 19), and Jeff and Joel talked a bit about scaling server hardware as a website grows. From what Joel was saying, the first few steps are pretty standard:
One server running both the webserver and the database (the current Stackoverflow setup)
One webserver and one database server
Two load-balanced webservers and one database server
They didn't talk much about what comes next though. Do you add more webservers? Another database server? Replicate this three-machine cluster in a different datacenter for redundancy? Where does a web startup go from here in the hardware department?
A reasonable setup supporting an "average" web application might evolve as follows:
Single combined application/database server
Separate database on a different machine
Second application server, with DNS round-robin (poor man's load balancing) or a dedicated balancer such as Perlbal
Second, replicated database server (for read load; this requires some application-logic changes so that eligible database reads go to a slave; see the sketch after this list)
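A sketch of that application-logic change (the DSNs are placeholders, and the naive "starts with SELECT" test deliberately ignores wrinkles such as SELECT ... FOR UPDATE and replication lag):

    import random

    MASTER_DSN = "mysql://master.internal/app"  # placeholder
    REPLICA_DSNS = [
        "mysql://replica1.internal/app",        # placeholders
        "mysql://replica2.internal/app",
    ]

    def pick_dsn(sql):
        """Route writes to the master and plain SELECTs to a random slave."""
        is_read = sql.lstrip().lower().startswith("select")
        return random.choice(REPLICA_DSNS) if is_read else MASTER_DSN

    print(pick_dsn("SELECT body FROM posts"))        # one of the replicas
    print(pick_dsn("INSERT INTO posts VALUES (1)"))  # the master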
At this point, evaluating the current state of affairs will help you determine a better scaling path. For example, if read load is high and content doesn't change too often, it might be better to emphasise caching and introduce dedicated front-end caches, e.g. Squid, to avoid unneeded database reads, although you will then need to consider how to maintain cache coherency, typically in the application.
On the other hand, if content changes reasonably often, then you will probably prefer a more spread-out solution: introduce a few more application servers and database slaves to help mitigate the load, and use object caching, such as memcached, to avoid hitting the database for the less volatile content (see the sketch below).
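A minimal sketch of the usual cache-aside pattern, using the python-memcached client (fetch_from_db and the five-minute expiry are placeholders):

    import memcache  # python-memcached client

    mc = memcache.Client(["127.0.0.1:11211"])

    def fetch_from_db(article_id):
        # Stand-in for the real database read.
        return {"id": article_id, "title": "example"}

    def get_article(article_id):
        key = "article:%d" % article_id
        article = mc.get(key)       # try the cache first
        if article is None:         # miss: hit the database, then populate
            article = fetch_from_db(article_id)
            mc.set(key, article, time=300)  # expire after five minutes
        return article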
For most sites, this is probably enough, although if you do become a global phenomenon, then you'll probably want to start considering having hardware in regional data centres, and using tricks such as geographic load balancing to direct visitors to the closest "cluster". By that point, you'll probably be in a position to hire engineers who can really fine-tune things.
Probably the most valuable scaling advice I can think of would be to avoid worrying about it all far too soon; concentrate on developing a service people are going to want to use, and making the application reasonably robust. Some easy early optimisations are to make sure your database design is fairly solid, and that indexes are set up so you're not doing anything painfully crazy; also, make sure the application emits cache-control headers that direct browsers on how to cache the data. Doing this sort of work early on in the design can yield benefits later, especially when you don't have to rework the entire thing to deal with cache coherency issues.
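To illustrate the cache-control point, a bare WSGI sketch that marks a rarely-changing response as long-lived (the one-day max-age is an arbitrary example):

    def application(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/css"),
            ("Cache-Control", "public, max-age=86400"),  # browsers may reuse for a day
        ])
        return [b"body { margin: 0 }"]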
The second most valuable piece of advice I want to put across is that you shouldn't assume what works for some other web site will work for you; check your logs, run some analysis on your traffic and profile your application - see where your bottlenecks are and resolve them.
The Plenty of Fish architecture
Some interesting videos:
YouTube scalability
Interview with Dan Farino, System Architect at MySpace
Joel mentioned adding a second datacenter, with the same setup, and then assigning your users randomly to each. Changes to the data are logged and sent from one location to the other, so that both locations contain all the data.
The talk Scalable Web Architectures: Common Patterns & Approaches by Cal Henderson (Yahoo) at the Web 2.0 Expo was quite interesting. I thought there was a video, but I could not find it. Here are the slides, though:
http://www.slideshare.net/techdude/scalable-web-architectures-common-patterns-and-approaches
One likely next step would be a cluster of web servers (a web farm) and a clustered database system (replication, Oracle RAC, etc.).
If you're interested in caching and using .NET, look into the Caching Application Block in the Enterprise Library (and, of course, use it along with the other points above).