Why do so many things in processor architectures come in powers of two (2^n)? - embedded

I am a fresh graduate engineer, and I want to learn some basics in this area.
I want to understand why everything in microprocessors is expressed in terms of 2 and its powers.
Why are our storage sizes 256 KB or 1024 KB? Why are our microprocessors 4, 8, 16, or 32 bits?

Related

Choosing a chat-bot framework for a data science research project and understanding the hidden costs of development and rollout?

The question is about using a chat-bot framework in a research study, where one would like to measure the improvement of a rule-based decision process over time.
For example, we would like to understand how to improve the process of medical condition identification (and treatment) using the minimal set of guided questions and patient interaction.
Medical conditions can be formulated into work-flow rules by doctors. A possible technical approach for such a study would be to develop an app or web site that patients can access, where they ask free-text questions that a predefined rule-based chat-bot addresses. During the study a doctor will monitor the collected data and improve the rules and the possible responses (and also provide new responses when the workflow reaches a dead end). We do plan to collect the conversations and apply machine learning to generate an improved work-flow tree (and questions) over time; however, the plan is to do any data analysis and processing offline, and there is no intention of building a full product.
This is a low-budget academic study. The PhD student has good development skills and data science knowledge (Python) and will be accompanied by a fellow student who will work on the engineering side. One of the conversational-AI options recommended for data scientists was RASA.
I have invested the last few days in reading about and playing with several chat-bot solutions (RASA, Botpress), also looked at Dialogflow, and read tons of comparison material, which makes the choice even more challenging.
From sources on the internet it seems that RASA might be a better fit for data science projects; however, it would be great to get a sense of the real learning curve and how fast one can expect to have a working bot, especially one whose rules have to be updated continuously.
A few things to clarify: we do have data to generate the questions and are in touch with doctors to improve their quality. It seems we need a way to present participants with multiple-choice questions and capture their answers (not just free text). Being on the research side, there is also no need to align with any specific big provider (i.e. Google, Amazon, or Microsoft) unless it has a clear benefit. The important considerations are time, money, and flexibility: we would like to have a working approach within a few weeks (and continuously improve it), and the whole experiment will run for no more than 3-4 months. We also need to be able to extract all the data. Finally, we are not sure which channel is best for such a study - WhatsApp? A website? Something else? - and what complexities each involves.
Any thoughts on the challenges and considerations of dealing with chat-bots would be valuable.
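For illustration, here is a minimal, framework-agnostic Python sketch of the kind of work-flow rule described above: a decision tree whose nodes either present a multiple-choice question or give a terminal response. The node names, questions, and answers are hypothetical placeholders; RASA or Botpress would express the same idea in their own rule/story formats.

    from dataclasses import dataclass, field
    from typing import Dict, Optional, Tuple

    @dataclass
    class Node:
        question: str = ""                                      # multiple-choice question to ask
        options: Dict[str, str] = field(default_factory=dict)   # answer text -> next node id
        response: Optional[str] = None                          # terminal advice, if any

    # Hypothetical triage flow; in the study this tree would be authored and
    # continuously revised by the doctors.
    FLOW = {
        "start":      Node("Do you have a fever?", {"yes": "fever", "no": "free_text"}),
        "fever":      Node("How long has it lasted?",
                           {"under 3 days": "rest", "3 days or more": "see_doctor"}),
        "free_text":  Node(response="Please describe your symptoms in your own words."),
        "rest":       Node(response="Rest and fluids; contact us if it persists."),
        "see_doctor": Node(response="Please book an appointment with the clinic."),
    }

    def step(node_id: str, answer: Optional[str] = None) -> Tuple[str, str]:
        """Advance the conversation: returns (new_node_id, text to show the participant)."""
        if answer is not None and answer in FLOW[node_id].options:
            node_id = FLOW[node_id].options[answer]
        node = FLOW[node_id]
        return node_id, node.response or node.question

    # Example turn: participant at "start" answers "yes".
    print(step("start", "yes"))   # -> ('fever', 'How long has it lasted?')

Whatever framework you pick, having the rules in a plain data structure like this makes it easy to export the whole tree and the conversation logs for the offline analysis you describe.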

Robot odometry in labview

I am currently working on a (school-)project involving a robot having to navigate a corn field.
We need to build the complete software in NI LabVIEW.
Because of the tasks the robot has to perform, it has to know its position.
As sensors we have a 6-DOF IMU, some unreliable wheel encoders, and a 2D laser scanner (SICK TIM351).
So far I have been unable to find any suitable algorithms or tutorials, so I am really stuck on this problem.
I am wondering if anyone has ever attempted to make SLAM work in LabVIEW, and if so, whether there are any examples or explanations of how to do it?
Or is there perhaps a toolkit for LabVIEW that contains this function/algorithm?
Kind regards,
Jesse Bax
3rd year mechatronic student
As Slavo mentioned, there's the LabVIEW Robotics module that contains algorithms like A* for pathfinding. But as far as I am aware, there's not much in it that can help you solve the SLAM problem. The SLAM problem consists of the following parts: landmark extraction, data association, state estimation, and updating of state.
For landmark extraction, you have to pick one or more features that you want the robot to recognize. This can, for example, be a corner or a line (a wall in 3D). You can use clustering, split-and-merge, or the RANSAC algorithm. I believe your laser scanner extracts and stores the points in a list sorted by angle, which makes the split-and-merge algorithm very practical. RANSAC is the most accurate of them, but it also has higher complexity. I recommend starting with some ideal data points for testing the line extraction. For example, put your laser scanner in a small room with straight walls, perform one scan, and save it to an array or a file. Make sure the contour is a bit more complex than just four walls, and remove noise either before or after the measurement.
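To make the split step concrete, here is a small Python/NumPy sketch of the "split" half of split-and-merge on an angle-ordered 2D scan; the merge step is omitted and the distance threshold is an arbitrary assumption. You would rebuild the same logic in LabVIEW or MathScript.

    import numpy as np

    def point_line_distance(p, a, b):
        # Perpendicular distance from point p to the infinite line through a and b.
        return abs((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])) \
               / np.hypot(b[0] - a[0], b[1] - a[1])

    def split(points, threshold=0.05):
        """points: (N, 2) array of scan points ordered by angle.
        Returns (start, end) index pairs of segments that are straight within `threshold` metres."""
        a, b = points[0], points[-1]
        dists = [point_line_distance(p, a, b) for p in points[1:-1]]
        if not dists or max(dists) < threshold:
            return [(0, len(points) - 1)]            # segment is straight enough
        k = int(np.argmax(dists)) + 1                # farthest interior point: split here
        left = split(points[:k + 1], threshold)
        right = split(points[k:], threshold)
        return left + [(s + k, e + k) for s, e in right]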
I haven't read up on good methods for data association, but you could, for example, simply consider a landmark new if it is more than a certain distance away from all existing landmarks, and otherwise update the matching old landmark.
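A minimal sketch of that gating idea in Python (the 0.5 m gate is an arbitrary assumption):

    from math import hypot

    def associate(observations, landmarks, gate=0.5):
        """Match each observed landmark position (x, y) to the nearest stored landmark
        if it is within `gate` metres; otherwise register it as a new landmark.
        Returns (observation_index, landmark_index) pairs; `landmarks` grows in place."""
        matches = []
        for i, (ox, oy) in enumerate(observations):
            if landmarks:
                dists = [hypot(ox - lx, oy - ly) for lx, ly in landmarks]
                j = min(range(len(dists)), key=dists.__getitem__)
                if dists[j] < gate:
                    matches.append((i, j))
                    continue
            landmarks.append((ox, oy))
            matches.append((i, len(landmarks) - 1))
        return matches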
State estimation and updating of state can be achieved with the complementary filter or the Extended Kalman Filter (EKF). The EKF is the de facto standard for nonlinear state estimation [1] and tends to work very well in practice. The theory behind the EKF is quite tough, but implementing it should be a tad easier. I would recommend using the MathScript module if you are going to program the EKF. The point of these two filters is to estimate the position of the robot from the wheel encoders and the landmarks extracted from the laser scanner.
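Below is a compact Python/NumPy sketch of an EKF localisation step: predict from wheel odometry, then correct with a range-bearing observation of one landmark at a known position. It is only the localisation core - full SLAM additionally keeps the landmark positions in the state vector - and the noise matrices Q and R are placeholders you would tune; the same equations translate directly into MathScript.

    import numpy as np

    def wrap(a):                                  # keep angles in (-pi, pi]
        return (a + np.pi) % (2 * np.pi) - np.pi

    def ekf_predict(x, P, v, w, dt, Q):
        """x = [x, y, theta]; v, w = linear/angular velocity from the wheel encoders."""
        th = x[2]
        x_pred = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        return x_pred, F @ P @ F.T + Q

    def ekf_update(x, P, z, landmark, R):
        """z = [range, bearing] measurement of a landmark at known position (lx, ly)."""
        dx, dy = landmark[0] - x[0], landmark[1] - x[1]
        q = dx * dx + dy * dy
        z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
        H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                      [ dy / q,          -dx / q,         -1]])
        y = z - z_hat                      # innovation
        y[1] = wrap(y[1])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x_new = x + K @ y
        x_new[2] = wrap(x_new[2])
        return x_new, (np.eye(3) - K @ H) @ P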
As the SLAM problem is a big task, I would recommend programming it as multiple smaller SubVIs, so that you can properly test each part without too much added complexity.
There's also a lot of good papers on SLAM.
http://www.cs.berkeley.edu/~pabbeel/cs287-fa09/readings/Durrant-Whyte_Bailey_SLAM-tutorial-I.pdf
http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf
The book "Probabalistic Robotics".
https://wiki.csem.flinders.edu.au/pub/CSEMThesisProjects/ProjectSmit0949/Thesis.pdf
LabVIEW provides the LabVIEW Robotics module, and there are plenty of templates for it. First, check the Starter Kit 2.0 template, which will give you a simple, working self-driving robot project. You can base your own application on such a template and develop it from a working model rather than from scratch.

Models for 3d game programming? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm a beginner in game development and game programming. I have experience in computer graphics - mainly OpenGL.
These days I finally have some spare time to polish my game coding skills.
But when I came to program a simple 3D game, I couldn't find any good resources for free textures and models for 3D graphics (for 2D games, for example, I found many resources for sprite sheets and so on).
Is there any good resource you're familiar with for 3D game textures/models?
This is not a programming question.
As far as I know, good, free, high-quality modeling resources do not exist (of "good", "free", and "high-quality", pick two).
There are multiple free model repositories, but the quality of their content is generally poor, and there are a few places where you can buy models.
There are free textures in multiple places (like this one), and they are easier to find than good free models.
Also, most free content comes with some kind of catch - "non-commercial use only", "Creative Commons Share-Alike" (i.e. if you make a derivative, it must use the same license), or it is under the GPL.
Anyway, if you're okay with Creative Commons Share-Alike or the GPL, you can probably use content from some open-source games (OpenArena), get quite a lot of textures from Wikipedia, Wikimedia Commons, or Flickr, and google for "free textures". You should be careful about using content from open-source games, though - some open-source projects (like Warsow and Sauerbraten) use closed-source/restricted licenses for the game content (i.e. you're free to reuse and modify the engine, but you cannot modify the game content and you cannot use it with a modified engine; the reasons are pretty obvious).
Anyway, it depends on what kind of model you want. It is pretty easy to find "easy" stuff like boxes, barrels, etc., because everyone can make those. When it comes to guns and vehicles, there will be trouble - quality drops and the number of good models decreases. And if you want a fully rigged, animated character with multiple animations, you can normally forget about it - such content is almost impossible to find. You can probably use mods for Q3 and Q2 if you want characters (though you can forget about physics in that case).
I'd recommend forgetting about "free stuff" and trying to make the content yourself, or hiring someone to do it.
If you decide to make content yourself, you'll need a digital photo camera and (optionally) a graphics tablet. You can make mediocre textures from photos (digital cameras are cheap) using GIMP, the gimp-resynthesizer plugin, the gimp-texturize plugin, high-pass filters, etc. You can also make normal maps using Blender or GIMP, and there are even tutorials about extracting them from photos (you will still need to process them by hand). Modeling and animation can be done in Blender (after one or two weeks of training) using reference photos. Low-poly modeling is pretty quick (20 minutes to make a low-poly, low-quality gun; an hour or two to make a simple character), but texturing and animation take more: setting up animation for a character can take a few hours for an amateur, making one animation will take at least several hours as well, making the texture unwrap about an hour, and painting the texture up to a few days, depending on the quality you want, the available reference material, the availability of a graphics tablet, etc. It is possible to cut corners a bit - for example, for making animations you can film the motion with a photo camera (or video camera) and then use it for rotoscoping. Also, you'll need to find some model format Blender can export to, or you'll have to write an export plugin in Python.
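If you do end up writing your own exporter, the Blender Python API keeps a bare-bones one quite short. The sketch below (run from Blender's text editor) just dumps the active object's vertices and faces to a made-up plain-text format; a real exporter would also handle UVs, normals, materials, and your engine's actual file format.

    import bpy

    obj = bpy.context.active_object          # the currently selected object
    mesh = obj.data

    with open("/tmp/model_dump.txt", "w") as out:
        for v in mesh.vertices:
            out.write("v %.6f %.6f %.6f\n" % tuple(v.co))
        for poly in mesh.polygons:
            out.write("f " + " ".join(str(i) for i in poly.vertices) + "\n")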
The Blender foundation has a large model repository which may be of use.
There are some free models at Turbosquid that I use sometimes for my XNA games.
But of course, the best stuff is not free.
My experience is that there is very little in the way of quality 3D models with animation and full rigging freely available. There are a few companies like this one that sell suitable models cheaply, and I guess most hobbyists could afford one or two models from them fairly easily, which would probably be sufficient for learning. (I have no connection to them, but I did buy one model pack from them which I quite liked.)
It would be nice if there were a few more freely available 3D animated models around, though. I even think it might be in the interest of some of the companies that make them to give a few away. If I'd been able to get further in my hobby projects I might have spent £100-200 in total on some nice model packs to make my project better, but due to the lack of any real 3D animated models I ended up losing interest in all my 3D projects before I got to the point of thinking maybe I'd spend a little money on this hobby. I wonder if the availability of a few more free quality models would actually significantly increase the size of the market for those companies, as more people got their projects to the point where they were willing to spend a little money on them.
Some company should make a nice model pack with a few static models and a couple of fully rigged and animated humans and "monsters", and say that if the community donates £10000 they'll release them for free use. I suspect there are enough people out there who would like a few quality models that they might reach this target, in the same way that Blender was originally sold to the public.
I know that it's been a long time since this question was asked, but I ran into the same problem when programming in XNA and found a good solution. As long as you don't need rigged/animated models, Google Warehouse is the best place to search. As far as I know, every model submitted to Google Warehouse is available under a Creative Commons license. You just need to:
Download and install Google Sketchup (Sketchup download)
Browse to find a model (Google Warehouse) - there's a 3D preview for each one!
Get a plugin to export Sketchup models to .X - I recommend the '3D RAD' plugin (3D RAD download)
If your model does not look good after the export, try to separate it into several less complex ones.
You are looking for open game art...
http://thefree3dmodels.com/ has a multitude of free 3D models. I've used a few of these for animation purposes; maybe it'll help you too.

Neural Networks Project? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I'm looking for ideas for a Neural Networks project that I could complete in about a month or so. I'm doing it for the National Science Fair, so I need something that has some curb appeal as well since it's being judged.
It doesn't necessarily have to be completely new and unique; I'm just looking for ideas, but it should be complex enough to impress someone who knows the field. My first idea was to implement a spam filter of sorts, but I recently found out that NNs aren't a very good way to do it. I've already got a basic NN simulator with genetic algorithms, and I'm also adding generic back-propagation as well.
Any ideas?
Look into Numenta's Hierarchical Temporal Memory (HTM) concept. This may be slightly off topic if the expectation is of "traditional" Neural Nets, but it is also an extremely promising avenue for Artificial Intelligence.
Although Numenta introduced HTM and its associated software platform, NuPIC, almost five years ago, the first commercial product based upon this technology was released (in beta) a few weeks ago by Vitamin D. It is called Vitamin D Video and essentially turns any webcam or IP camera into a sophisticated video monitoring system, recognizing classes of items (say persons vs. cats or other animals) in the video feed.
With the proper setup, this type of application could make for an interesting display at the Science Fair, one with much "curb appeal".
To whet your appetite, or even get your feet wet with HTM technology, you can download NuPIC and check its various sample applications. Chances are that you will find something that meets the typical science-fair criteria of both geekiness and coolness.
Generally, HTMs aim at solving problems which are simple for humans but difficult for computers; such a statement is somewhat generic and applies to Neural Nets as well, but HTMs take this to the "next level".
Although written in C (I think), NuPIC is typically interfaced from Python, which makes it a convenient test bed for simple yet sophisticated proof-of-concept applications.
You could always try playing around with a neural network and stock prices; if I had a month of spare time for a neural network implementation, that's what I would play with.
A friend of mine in college wrote an NN to play Go on a 9x9 board.
I don't think it ever got very good, but I think it would be fun to try.
Look at how a bidirectional associative memory compares with classical edit-distance algorithms (Levenshtein, Damerau-Levenshtein, etc.) for typo correction. Also consider the articles on Hebbian unlearning while training your NN - it seems that the confabulation phenomenon is avoided.
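For reference, the classical baseline you would compare the BAM against is just a small dynamic program. A plain-Python sketch of Levenshtein distance (Damerau-Levenshtein adds a transposition case on top of this):

    def levenshtein(a: str, b: str) -> int:
        """Minimum number of insertions, deletions and substitutions turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("recieve", "receive"))  # 2 (Damerau-Levenshtein would give 1)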
I've done some work on top of NNs, mainly an XML-based language (Neural XML). See the details here:
http://amazedsaint.blogspot.com/search/label/Neural%20Network
Also, one interesting .NET neural network project is AForge.NET - check that out as well.
You could implement the game Cellz, or create a controller for it. It was first created by Simon M. Lucas. It's a nice and interesting game, and I'm sure everyone will love it. I also used it for a school project and it turned out quite well.
On that page you can also find links to other interesting games.
How about applying it to predicting exchange rates (USD-EUR, for example, for sub-minute trading)? It should be fun to show a net gain of money over one month.
I doubt this will work for trades longer than a minute... without a lot of extra work.
I like using committee machines, so why not apply it to face detection in images/movies, or to voice-print authentication?
Finally, you could get it to play pleasing music and use a crowd-sourced fitness function whereby people vote for the best "musicians".

Can you program a pure GPU game?

I'm a CS master's student, and next semester I will have to start working on my thesis. I've had trouble coming up with a thesis idea, but I decided it will be related to computer graphics, as I'm passionate about game development and wish to work as a professional game programmer one day.
Unfortunately I'm kinda new to the field of 3D computer graphics. I took an undergraduate course on the subject and hope to take an advanced course next semester, and I'm already reading a variety of books and articles to learn more. Still, my supervisor thinks it's better if I come up with a general thesis idea now and then spend time learning about it in preparation for my thesis proposal. My supervisor has supplied me with some good ideas, but I'd rather do something more interesting on my own, something which hopefully has to do with games and gives me more opportunities to learn about the field. I don't care if it's already been done; for me the thesis is more of an opportunity to learn about things in depth and to do substantial work on my own.
I don't know much about GPU programming, and I'm still learning about shaders and languages like CUDA. One idea I had is to program an entire game (or as much as possible) on the GPU, including all the game logic, AI, and tests. This is inspired by reading papers on GPGPU and questions like this one. I don't know how feasible that is with my current knowledge, and my supervisor doesn't know a lot about recent GPUs. I'm sure with time I will be able to answer this question on my own, but it would be handy to know the answer in advance so I could also consider other ideas.
So, if you've got this far, my question: using only shaders or something like CUDA, can you make a full, simple 3D game that exploits the raw power and parallelism of GPUs? Or am I missing some limitation or difference between GPUs and CPUs that will always make a large portion of my code bound to the CPU? I've read about physics engines running on the GPU, so why not everything else?
DISCLAIMER: I've done a PhD, but have never supervised a student of my own, so take all of what I'm about to say with a grain of salt!
I think trying to force as much of a game as possible onto a GPU is a great way to start off your project, but eventually the point of your work should be: "There's this thing that's an important part of many games, but in its present state doesn't fit well on a GPU: here is how I modified it so it would fit well".
For instance, fortran mentioned that AI algorithms are a problem because they tend to rely on recursion. True, but this is not necessarily a deal-breaker: the art of converting recursive algorithms into an iterative form is looked upon favorably by the academic community, and would form a nice centerpiece for your thesis.
However, as a master's student, you haven't got much time, so you would really need to identify the kernel of interest very quickly. I would not bother trying to get the whole game to actually fit onto the GPU as part of the outcome of your master's: I would treat it as an exercise just to see which part won't fit, and then focus on that part alone.
But be careful with your choice of supervisor. If your supervisor doesn't have any relevant experience, you should pick someone else who does.
I'm still waiting for a Gameboy Emulator that runs entirely on the GPU, which is just fed the game ROM itself and current user input and results in a texture displaying the game - maybe a second texture for sound output :)
The main problem is that you can't access persistent storage, user input, or audio output from a GPU. These parts have to be on the CPU, by definition (even though cards with HDMI have audio output, I think you can't control it from the GPU). Apart from that, you can already push large parts of the game code into the GPU, but I think it's not enough for a 3D game, since someone has to feed the 3D data into the GPU and tell it which shaders should apply to which part. You can't really randomly access data on the GPU or run arbitrary code; someone has to do the setup.
Some time ago, you would just set up a texture with the source data, a render target for the result data, and a pixel shader that would do the transformation. Then you rendered a quad with the shader to the render target, which would perform the calculations, and then read the texture back (or use it for further rendering). Today, things have been made simpler by the fourth and fifth generations of shaders (Shader Model 4.0 and whatever is in DirectX 11), so you can have larger shaders and access memory more easily. But they still have to be set up from the outside, and I don't know how things stand today regarding keeping data between frames. In the worst case, the CPU has to read back from the GPU and push the data again to retain game state, which is always a slow thing to do.
But if you can really get to a point where a single generic setup/rendering cycle is sufficient for your game to run, you could say that the game runs on the GPU. The code would be quite different from normal game code, though. Most of the performance of GPUs comes from the fact that they execute the same program in hundreds or even thousands of parallel shading units, and you can't just write a shader that draws an image at a certain position. A pixel shader always runs, by definition, on one pixel, and the other shaders can do things at arbitrary coordinates, but they don't deal with pixels. It won't be easy, I guess.
I'd suggest just trying out the points I described. The most important one is retaining state between frames, in my opinion, because if you can't retain the data, everything else is impossible.
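To make the state-retention idea concrete, here is a CPU-side NumPy illustration of the ping-pong pattern described above (not real GPU code): the whole game state lives in one "texture", a per-texel update rule plays the role of the full-screen pixel shader, and the two buffers are swapped each frame so nothing has to be read back to the host.

    import numpy as np

    rng = np.random.default_rng(0)
    state = (rng.random((64, 64)) > 0.7).astype(np.uint8)   # "texture" holding the game state
    target = np.empty_like(state)                           # the second render target

    def shader_pass(src, dst):
        """Stand-in for a full-screen pixel-shader pass (here: Game of Life rule)."""
        neighbours = sum(np.roll(np.roll(src, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
        dst[:] = ((neighbours == 3) | ((src == 1) & (neighbours == 2))).astype(np.uint8)

    for frame in range(100):
        shader_pass(state, target)
        state, target = target, state   # ping-pong: the write target becomes next frame's source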
First, I'm not a computer engineer, so my assumptions shouldn't even count as a grain of salt - nano-scale, maybe.
Artificial intelligence? No problem. There are countless neural network examples running in parallel that you can find on Google. Example: http://www.heatonresearch.com/encog
Pathfinding? Just try one of the parallel pathfinding algorithms that are already on the internet. Here is one of them: https://graphics.tudelft.nl/Publications-new/2012/BB12a/BB12a.pdf
Drawing? Use the interoperability of DX or GL with CUDA or CL so drawing doesn't cross the PCI-e lane. You can even do ray tracing at corners so there is no more z-fighting; a purely ray-traced screen is even doable with a mainstream GPU using a low depth limit.
Physics? The easiest part: just iterate a simple Euler or Verlet integration, with frequent stability checks if the order of error is big (see the sketch after this list).
Map/terrain generation? You just need a Mersenne-twister and a triangulator.
Save game? Sure, you can compress the data in parallel before writing it to a buffer; then a scheduler writes that data piece by piece to the HDD through DMA, so there is no lag.
Recursion? Write your own stack algorithm using main VRAM, not local memory, so other kernels can run in wavefronts and GPU occupancy stays high.
Too much integer work needed? You can cast to float, do 50-100 calculations using all cores, then cast the result back to integer.
Too much branching? Compute both cases if they are simple, so every core stays in line and finishes in sync. If not, you can put in a branch predictor of your own so that next time it predicts better than the hardware (could it?) with your own algorithm.
Too much memory needed? You can add another GPU to the system and open a DMA channel, or use CF/SLI, for faster communication.
The hardest part, in my opinion, is the object-oriented design, since building pseudo-objects on a GPU is very weird and hardware dependent. Objects should be represented in host (CPU) memory, but they must be split over many arrays on the GPU to be efficient. Example objects in host memory: orc1xy_orc2xy_orc3xy. Example objects in GPU memory: orc1_x__orc2_x__ ... orc1_y__orc2_y__ ...
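As promised at the physics item above, a tiny position-Verlet sketch (plain NumPy, arbitrary constants). Each particle's update depends only on its own previous positions, which is exactly the kind of independence that maps to one GPU thread per particle.

    import numpy as np

    dt = 1.0 / 60.0                                        # fixed time step
    gravity = np.array([0.0, -9.81])

    pos = np.array([[0.0, 10.0], [1.0, 12.0]])             # current particle positions
    prev = pos - np.array([[0.5, 0.0], [0.0, 0.0]]) * dt   # previous positions encode velocity

    for step in range(600):                                # 10 simulated seconds
        nxt = 2.0 * pos - prev + gravity * dt * dt         # position Verlet update
        prev, pos = pos, nxt

    print(pos)   # the particles have fallen under gravity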
The answer was already chosen 6 years ago, but for those interested in the actual question: Shadertoy, a live-coding WebGL platform, recently added a "multipass" feature that allows preservation of state.
Here's a live demo of the Bricks game running on the GPU.
"I don't care if it's already been done, for me the thesis is more of an opportunity to learn about things in depth and to do substantial work on my own."
Then your idea of what a thesis is is completely wrong. A thesis must be original research. --> edit: I was thinking of a PhD thesis, not a master's thesis ^_^
About your question: the GPU's instruction set and capabilities are very specific to vector floating-point operations. Game logic usually does little floating point and a lot of control logic (branches and decision trees).
If you take a look at the CUDA Wikipedia page you will see:
"It uses a recursion-free, function-pointer-free subset of the C language"
So forget about implementing there any AI algorithms that are essentially recursive (like A* for pathfinding). Maybe you could simulate the recursion with stacks, but if it's not allowed explicitly, there is probably a reason. Not having function pointers also somewhat limits the ability to use dispatch tables for handling the different actions depending on the state of the game (again, you could use chained if-else constructions, but something smells bad there).
Those limitations in the language reflect that the underlying hardware is mostly designed for stream-processing tasks. Of course there are workarounds (stacks, chained if-else), and you could theoretically implement almost any algorithm there, but they will probably make the performance suck a lot.
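As a small illustration of the stack workaround (plain Python, not kernel code): a recursive depth-first traversal rewritten with an explicit stack, which is the shape most recursive game/AI logic has to take before it can live in a shader or CUDA kernel.

    def dfs_iterative(graph, start):
        """graph: dict mapping a node to a list of neighbour nodes."""
        visited, order = set(), []
        stack = [start]                    # the explicit stack replaces the call stack
        while stack:
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            order.append(node)
            stack.extend(graph.get(node, []))
        return order

    print(dfs_iterative({"a": ["b", "c"], "b": ["d"], "c": [], "d": []}, "a"))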
The other point is handling the I/O: as already mentioned above, this is a task for the main CPU (because it is the one that executes the OS).
It is viable to do a master's thesis on a subject, and with tools, that you are unfamiliar with when you begin. However, it's a big chance to take!
Of course a master's thesis should be fun. But ultimately, it's imperative that you pass with distinction, and that might mean tackling a difficult subject that you have already mastered.
Equally important is your supervisor. It's imperative that you tackle a problem they show an interest in - one that they are themselves familiar with - so that they become interested in helping you get a great grade.
You've had lots of hobby time for scratching itches, and you'll no doubt have lots more hobby time in the future too. But master's thesis time is not the time for hobbies, unfortunately.
Whilst GPUs today have immense computational power, they are, regardless of things like CUDA and OpenCL, limited to a restricted set of uses, whereas the CPU is more suited to computing general things, with extensions like SSE to speed up specific common tasks. If I'm not mistaken, some GPUs are unable to divide two floating-point numbers in hardware. Certainly things have improved greatly compared to 5 years ago.
It'd be impossible to develop a game that runs entirely on a GPU - it would need the CPU at some stage to execute something. However, making a GPU perform more than just the graphics (and even the physics) of a game would certainly be interesting, with the catch that PC game developers have the biggest issue of having to contend with a variety of machine specifications, and thus have to restrict themselves to incorporating backwards compatibility, which complicates things. The architecture of a system will be a crucial issue - for example, the PlayStation 3 can move multiple gigabytes a second between the CPU and RAM and between the GPU and video RAM, yet the CPU reading GPU memory peaks out at just past 12 MiB/s.
The approach you may be looking for is called "GPGPU" for "General Purpose GPU". Good starting points may be:
http://en.wikipedia.org/wiki/GPGPU
http://gpgpu.org/
Rumors about spectacular successes in this approach have been around for a few years now, but I suspect that this will become everyday practice in a few years (unless CPU architectures change a lot, and make it obsolete).
The key here is parallelism: you need a problem that can be split across a large number of parallel processing units. Thus, neural networks or genetic algorithms may be a good range of problems to attack with the power of a GPU. Maybe also looking for vulnerabilities in cryptographic hashes (cracking DES on a GPU would make a nice thesis, I imagine :)). But problems requiring high-speed serial processing don't seem well suited to the GPU, so emulating a GameBoy may be out of scope. (But emulating a cluster of low-power machines might be considered.)
I would think a project dealing with a game architecture that targets multi-core CPUs and GPUs would be interesting. I think this is still an area where a lot of work is being done. In order to take advantage of current and future computer hardware, new game architectures are going to be needed. I went to GDC 2008 and there were some talks related to this. Gamebryo had an interesting approach where they create threads for processing computations. You can designate the number of cores you want to use, so that you don't starve out other libraries that might also be multi-core. I imagine the computations could be targeted at GPUs as well.
Other approaches included targeting different systems to different cores so that computations could be done in parallel. For instance, the first split one talk suggested was to put the renderer on its own core and the rest of the game on another. There are other, more complex techniques, but it all basically boils down to how you get the data to the different cores.
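As a toy Python sketch of that "renderer on its own core" split, here is a bounded queue moving snapshots of the game state from the logic side to the render side. (Python threads share a core because of the GIL, so this only illustrates the data flow, not the speed-up; the field names are made up.)

    import queue
    import threading
    import time

    frames = queue.Queue(maxsize=2)          # bounded: logic can't run far ahead of rendering

    def game_logic():
        for tick in range(5):
            snapshot = {"tick": tick, "player_pos": (tick * 0.1, 0.0)}
            frames.put(snapshot)             # hand this frame's state to the renderer
            time.sleep(1 / 60)
        frames.put(None)                     # sentinel: tell the renderer to stop

    def renderer():
        while True:
            state = frames.get()
            if state is None:
                break
            print("rendering", state)

    threading.Thread(target=game_logic).start()
    renderer()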