Why are pull-up and pull-down resistors connected to a pin?
How do you configure a pin as pull-up or pull-down, or as an interrupt source?
For an output, it gives the pin a defined logic state when the GPIO is in its reset state, which is normally a high-impedance input and so does not drive the pin to a valid logic level.
For an input, the need for it is determined by the attached device, which may also be high-impedance or "floating" on start-up, in which case the pull-up/down will ensure a valid level.
Devices with open-drain/open-collector outputs will need a pull-up/down.
You will need at least a basic understanding of electronics to be successful in embedded systems development (unless everything you need happens to be on one off-the-shelf board that requires no modifications or additions). Get yourself a copy of Horowitz & Hill's The Art of Electronics or similar.
Some devices by design can only drive a 1 or only drive a 0, and some can drive both. Where you would use something like that is on a shared line, SPI for example: if you are the device being addressed you pull the line to zero, and only one device pulls it to zero at a time. Since the line is otherwise floating, you need a pull-up resistor to make it read as a one the rest of the time. Think of it as a ball on a weak spring: the spring keeps the ball up near the ceiling; when you need the ball on the ground it is pulled down against the spring, and when it is released the spring pulls it back up to the ceiling. A similar thing happens on these kinds of buses.
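As for the second half of the question, how a pin's pull and interrupt behaviour is configured is entirely part-specific, so the reference manual for your particular MCU is the authority. Purely as an illustration, here is a minimal sketch assuming an STM32F4 part with the ST HAL; the pin (PC13), header name and interrupt priority are arbitrary choices, and other vendors expose the same features through different registers or APIs.

    /* Sketch only: assumes an STM32F4 with the ST HAL; pin choice is arbitrary.
       Configures PC13 as an input with the internal pull-up enabled, and as an
       EXTI interrupt source triggering on the falling edge. */
    #include "stm32f4xx_hal.h"

    void button_pin_init(void)
    {
        GPIO_InitTypeDef cfg = {0};

        __HAL_RCC_GPIOC_CLK_ENABLE();            /* clock the GPIO port first */

        cfg.Pin  = GPIO_PIN_13;
        cfg.Mode = GPIO_MODE_IT_FALLING;         /* input + EXTI, falling edge */
        cfg.Pull = GPIO_PULLUP;                  /* internal pull-up; use GPIO_PULLDOWN
                                                    or GPIO_NOPULL as the circuit requires */
        HAL_GPIO_Init(GPIOC, &cfg);

        HAL_NVIC_SetPriority(EXTI15_10_IRQn, 5, 0);
        HAL_NVIC_EnableIRQ(EXTI15_10_IRQn);      /* EXTI lines 10..15 share this IRQ */
    }

    void EXTI15_10_IRQHandler(void)
    {
        HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_13);   /* clears the flag, calls the callback */
    }

    void HAL_GPIO_EXTI_Callback(uint16_t pin)
    {
        if (pin == GPIO_PIN_13) {
            /* handle the edge; keep it short, this runs in interrupt context */
        }
    }

Note that internal pulls are typically weak (tens of kilo-ohms); for an open-drain bus or a noisy line you may still want the external resistor described above.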
I'm working on a project, and since it was my first attempt at EF I wasn't quite sure how to go about certain things like calculations, so I used a SQL computed column to handle row totals (quantity * price) and some SQL views that calculate SUM().
In the old days I would have fired up a view or a stored procedure.
UPDATE
I think what I was really asking was whether to keep a hybrid solution that calculates the rows inside SQL, or to shift that into code and handle it on the web server.
RESULT
I carried on with the SQL Server option. However, having since moved to NoSQL, I now know I should have had it handled by the application's code, as that would have made it more versatile.
My experience is that databases are extremely fast at this sort of thing since they are heavily optimized for these operations.
Also, consider the alternative. Say you want to calculate SUM(quantity * price) for a 50,000-row result set (never mind the table size; let's just say your query was 50k rows big). If you did it in your app you'd have to retrieve 50k pairs of integers (let's just assume they're ints, and 32-bit at that, which they probably aren't). So that's 64 bits per row, and 50k rows, giving us 3.2 million bits, or 400 KB of data.
Now the DB has to spit that to your app over the network, your app has to wait for the data, read it all into some data structure, and then iterate over it. The network transfer time for that operation is going to obliterate any savings you might have had by doing the calculation in your app.
By contrast, if the DB does the sum you have to transfer a lot less data. You may also get some benefit if you're able to cache queries on the DB side (this is vendor-specific, of course).
In a nutshell - unless you have a really good reason to do this sort of thing in your app, just leave it to the DB and save yourself the headache.
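To make the contrast concrete, here is a hedged sketch using SQLite's C API purely for illustration; the database file, table and column names (orders.db, order_lines, quantity, price) are made up, and with a networked server such as SQL Server the transfer cost makes the difference even larger.

    /* Sketch: the same total computed two ways. Names are hypothetical;
       SQLite is used only because it is easy to run locally. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *st;
        if (sqlite3_open("orders.db", &db) != SQLITE_OK)
            return 1;

        /* 1) Aggregate in the database: a single row comes back. */
        sqlite3_prepare_v2(db, "SELECT SUM(quantity * price) FROM order_lines",
                           -1, &st, NULL);
        if (sqlite3_step(st) == SQLITE_ROW)
            printf("DB-side total:  %f\n", sqlite3_column_double(st, 0));
        sqlite3_finalize(st);

        /* 2) Aggregate in the application: every row comes back first. */
        double total = 0.0;
        sqlite3_prepare_v2(db, "SELECT quantity, price FROM order_lines",
                           -1, &st, NULL);
        while (sqlite3_step(st) == SQLITE_ROW)
            total += sqlite3_column_double(st, 0) * sqlite3_column_double(st, 1);
        sqlite3_finalize(st);
        printf("App-side total: %f\n", total);

        sqlite3_close(db);
        return 0;
    }

Both produce the same number; the difference is how many rows cross the wire before you get it.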
My first goal would be to detect a single individual entering/leaving a room.
Then a few individuals entering/leaving a room at the same time.
Finally, if it's possible, detect people in wheelchairs.
Is this feasible with the Kinect SDK?
The Kinect SDK delivers a skeleton array. You can count the players in that array.
Besides that, there is a SkeletonFrameReady event that fires whenever a skeleton is detected and something has changed.
So you can detect people.
You can detect multiple people, but I believe a maximum of 4 at one time.
I don't know about the wheelchair. I guess it is possible.
As juergen d said, it's absolutely possible to detect people entering / leaving the Kinect Sensor's field of view as long as it's not more than four at the same time. Also, the sensor definitely recognizes people sitting in a wheelchair, although their skeleton data might be useless apart from the upper half of their body (leg joint data will be messed up for the sensor) - but since you only need to see if there's anybody in the room, you should be fine.
Edit: From the 1.5 version on (released in May 2012), the Kinect SDK explicitly supports seated application scenarios by offering a "seated" mode (from Kinect Blog: What's new in 1.5):
Provides the ability to track users’ upper body (10-joint) and overlook the lower body if not visible or relevant to application. In addition, enables the identification of user when sitting on a chair, couch or other inanimate object.
You could if you had multiple Kinects hooked up to your computer. As far as I know, each Kinect can only detect 6 people and track 2 at a time. And since my Kinect can detect me crawling on the floor, I would think it would be able to detect a person in a wheelchair. The new version also has a SkeletonTrackingMode which has a Seated member.
Please let me know which tool (GNU or third-party) is best to use for profiling and optimizing our code. Is gprof an effective tool? Has dtrace been ported to Linux?
You're not alone in conflating the terms "profiling" and "optimizing", but they're really very different. As different as weighing a book versus reading it.
As far as gprof goes, here are some issues.
Among profilers, the most useful are the ones that:
Sample the whole call stack (not just the program counter), or at least as much of it as contains your code.
Sample on wall-clock time (not just CPU time). If you're losing time in I/O or other blocking calls, a CPU-only profiler will simply not see it (see the sketch after this list).
Tell you, by line of code (not just by function), the percentage of stack samples containing that line. That's important because any such line of code that you could eliminate would save you that percentage of overall time, and you don't have to eyeball functions to guess where it is.
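Here is the sketch promised above (my illustration, not the answer's): a program whose wall-clock time is dominated by blocking. A CPU-time-only profiler charges nearly all of its samples to spin(), even though the program spends far longer blocked in nanosleep(), which stands in here for I/O or lock waits.

    /* Sketch: CPU-time sampling vs wall-clock sampling. */
    #define _POSIX_C_SOURCE 199309L
    #include <time.h>

    static volatile unsigned long sink;

    static void spin(void)                   /* a modest amount of pure CPU work */
    {
        for (unsigned long i = 0; i < 50UL * 1000 * 1000; i++)
            sink += i;
    }

    static void blocked_wait(void)           /* ~1 s blocked, almost no CPU used */
    {
        struct timespec ts = { 1, 0 };
        nanosleep(&ts, NULL);
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++) {
            spin();
            blocked_wait();
        }
        return (int)sink;                    /* keep the work from being optimized away */
    }

A wall-clock sampler (or simply pausing the program a few times in a debugger and looking at the stack) shows blocked_wait() dominating; a CPU-only profile barely mentions it.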
A good profiler for Linux that meets these criteria is Zoom. I'm sure there are others.
(Don't get confused about what matters. Efficiency and/or timing accuracy of a profiler is not important for helping you locate speed bugs. What's important is that it draws your attention to the right things.)
Personally, I use the random-pausing method, because I find it's the most effective.
Not only is it simple, requiring no investment, but it finds speedup opportunities that are not localized to particular routines or lines of code, as in this example.
This is reflected in the speedup factors that can be achieved.
gprof is better than nothing. But not much. It estimates time spent not only in a function, but also in all of the functions called by the function - but beware it is an estimate, not a direct measurement. It does not make the distinction that some two callers of the same subroutine may have widely differing times spent inside it, per call. To do better than that you need a real call graph profiler, one that looks at several levels of the stack on a timer tick.
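A small sketch of that "widely differing times per call" problem (my example, not the answer's): gprof apportions work()'s time among its callers in proportion to call counts, so its call-graph report blames cheap_caller() for roughly 99.9% of it, which is almost exactly backwards; a stack-sampling profiler attributes it correctly.

    /* Sketch: build with "gcc -O0 -pg" and run gprof to see the misattribution.
       work() costs time proportional to n; the expensive caller calls it once
       with a huge n, the cheap caller calls it 1000 times with a tiny n. */
    static volatile unsigned long sink;

    static void work(unsigned long n)
    {
        for (unsigned long i = 0; i < n; i++)
            sink += i;
    }

    static void expensive_caller(void)
    {
        work(100UL * 1000 * 1000);           /* 1 call, nearly all the real time */
    }

    static void cheap_caller(void)
    {
        for (int i = 0; i < 1000; i++)
            work(100);                       /* 1000 calls, trivial total time   */
    }

    int main(void)
    {
        expensive_caller();
        cheap_caller();
        return (int)sink;
    }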
dtrace is good for what it does.
If you are doing low-level performance optimization on x86, you should consider Intel's VTune tool. Not only does it provide the best access I am aware of to the low-level performance measurement hardware on the chip, the so-called EMON (Event Monitoring) system (some of which I designed), but VTune also has some pretty good high-level tools, such as call-graph profiling that, I believe, is better than gprof's. On the low level, I like doing things like generating profiles of the leading suspects, such as branch mispredictions and cache misses, and looking at the code to see if there is something that can be done. Sometimes simple stuff, such as making an array size 255 rather than 256, helps a lot.
Generic Linux oprofile, http://oprofile.sourceforge.net/about/, is almost as good as Vtune, and better in some ways. And available for x86 and ARM. I haven't used it much, but I particularly like that you can use it in an almost completely non-intrusive manner, with no need to create the special -pg binary that gprof needs.
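To illustrate the "array size 255 rather than 256" remark, here is a hedged sketch of my own (padding up by one element works the same way as trimming down by one; the point is avoiding a power-of-two stride): walking a matrix column-wise with a power-of-two row stride maps every access to the same few cache sets, so lines are evicted before they can be reused, while one element of padding per row spreads the accesses across sets. Profiling cache misses on the two functions below with VTune, oprofile or perf typically makes the difference obvious.

    /* Sketch: power-of-two stride vs padded stride during column-wise walks. */
    #include <stdio.h>

    #define N 1024

    static float tight[N][N];          /* row stride is a power of two (4096 bytes) */
    static float padded[N][N + 1];     /* one float of padding per row              */

    static float sum_columns_tight(void)
    {
        float s = 0.0f;
        for (int col = 0; col < N; col++)
            for (int row = 0; row < N; row++)
                s += tight[row][col];  /* lines land in the same few sets and are
                                          evicted before the next column reuses them */
        return s;
    }

    static float sum_columns_padded(void)
    {
        float s = 0.0f;
        for (int col = 0; col < N; col++)
            for (int row = 0; row < N; row++)
                s += padded[row][col]; /* lines spread across sets and survive long
                                          enough to serve the next 15 columns        */
        return s;
    }

    int main(void)
    {
        /* Time or profile the two functions separately; the arrays are zero-filled,
           so both sums are 0 and only the memory behaviour differs. */
        printf("%f %f\n", sum_columns_tight(), sum_columns_padded());
        return 0;
    }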
There are many tools with which you can optimize your code.
For web applications there are tools that compress and minify the code, e.g. JavaScript/CSS compressors such as the YUI Compressor.
For desktop applications, an optimizing compiler is a good start.
I would like to set up a server that stores price data for potentially 100,000+ products over time. Updates will be provided once or twice per month.
Each price would also have many components, enough that I run out of the 255 fields that Access allows and would burst its 2 GB limit sooner or later. (New fields might pop up at any moment for some products.)
The scale of this project is somewhat too small to bring in database experts to build a full-scale database at the moment. Is there any quick fix I can do with the free edition of Microsoft SQL Server?
Or am I going to run into hardware limitations as well?
You need to be more specific about what you want. If you are using 255 fields, then your table is broken.
But, to answer your question, something like the Express edition(s) of SQL Server will have no problem at all handling 100,000 products (or millions of products, for that matter, assuming your hardware is decent).
PostgreSQL is an open source dbms that doesn't have size limitations like the free versions of SQL Server and Oracle do. All dbms are naturally constrained by available disk space and available swap space. (But they're not all constrained the same way.) Some free dbms are constrained to use a single CPU; I'm pretty sure SQL Server Express is one of them.
And most dbms have a row size limitation. Depending on the dbms, the row size limitation might be a hard limit (can't create a table that stores more than 'n' bytes per row) or a soft limit ("long" rows--or at least parts of them--are moved to a different part of the database, which affects performance).
You can google "row size limitation", or search SO for the same phrase.
It would be interesting to see the structure and functional dependencies for a table of prices that needed 255 columns.
Many enterprise applications I've used cause me much frustration, whether it's a bad UI/UX, sluggishness, or jumping through hoops to get something simple done. This is a completely different world from the open-source applications I've used. What problems have you had, and what do you think causes the problem? How can they be improved?
It's common to hear this from developers working on enterprise applications:
It will be on the Intranet anyway so bandwidth is not an issue. Let's not waste our time on optimizing and caching.
We'll just add another web server if the load goes high. The entire org is 15K users anyway.
The oldest machine is a 1.6Ghz dual core so let's not waste time on performance
This interface is a bit complicated but Phil said the guys in accounting are pretty smart. He'll have a 5 hour training session this Friday in which he'll explain the use to them
Conventional web applications don't have training sessions. They are designed for the lowest common denominator. They aim to optimize client and, more importantly, server resources. There is no real ceiling on the size of the user base, and hitting 100K users is a delight. And criticism from the users usually equates to direct losses.
Another issue is that companies usually sign a contract for a software product and the software team is usually just aiming to deliver the "asks".
The enterprise is shielded from the criticism that open-source projects face, and in many circumstances is a collective driven by upper management. Most of the initiatives are driven by the "reading" edge: an exec sees an article in a mainstream publication on a plane, then comes and rattles the cages of the departments they run. Generally a committee is formed that is called "the team"; they meet regularly, and when they decide they can't handle the risk themselves, need objective input, or sometimes want protection from making decisions, they hire a consultancy to come and deliver the "project".
Sometimes this process can work well, when you have a dynamic team that can push back against the exec's wishes or challenge premises. A strong team dynamic can shorten the analysis cycle, and in some cases a good product can be produced. Many times, though, the team members just acquiesce to the exec's whims, make no decisions themselves, and only work to carry out the exec's vision. No push-back, no feedback; common sense is simply subordinated to the collective's hive-mother, who directs them.
As you can see, the expense is not generated by productive work; it's caused by the series of cult-of-personality seances that pose as collaboration. Because the project took so long and drove up expenses, you have to live with the results, for years or until the exec moves on.
The companies that have figured that out are where you want to work if you are interested in accomplishing great things. Or maybe they are a great place to work if you want a consulting gig where you get paid a lot and don't have much at stake.
There are many aspects, but I believe the ultimate root cause is that inevitably enterprise projects are neither requested, sold to, nor accepted by the people who will use the application.
As a result, they are massively overpriced to warrant CxO-attention-level budgets; derailed beyond control because of the massive team required to consume the budget; "processed" to death to keep the incredibly bloated team busy; and mutilated and reduced to unusability in the endless cover-your-ass wars resulting from the happy-sunshine delivery estimates. Those estimates would have worked reasonably well with a team of 5 white ravens tackling the top 80% of the value, but are 10 times too short for the 100 oddball consultants let loose in a never-ending trench war making zero progress, while the 5-level reporting hierarchy nervously green-shifts the news upward until the whole endeavour is quietly abandoned, because the amount of money down the drain is now significant enough that even the customer does not want to take the PR risk of owning up to the whole fiasco.
Have them go WHERE NO MAN HAS GONE BEFORE!
Haha. Really, though, I think the problem is overaggressive deadlines, lack of proper design, and lack of communication.