HDR & Panoramas: where to learn [closed] - objective-c

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
I have some ideas for software that can create HDR images or panoramas. I'd like to learn how to do these myself, for example how to create algorithms for image alignment, combining parts of images for HDR & tonemapping, etc. (Preferably in C/Obj-C, though the concepts will apply to any language.) Where are the best places to learn about these things, and what might be some simple projects I could start with?
I'm near the fabulous Powell's Technical Bookstore, so I can easily take a trip there — if you have any specific recommendations for books I'd love to hear them.

This is probably way too late to help, but for anyone else out there hoping to start more or less from scratch learning about panoramas and/or HDR imaging, I'd recommend starting by reading Richard Szeliski's excellent panorama tutorial. He's one of the leading names in panoramic imaging research, and that tutorial gives a thorough overview of all aspects, from image formation basics to registration (bringing disparate images into a common coordinate system), blending, ghost removal, etc. It also covers HDR aspects of panoramic images, such as how to combine differently exposed images into a panorama. He also recently published a computer vision textbook that probably has a lot of useful info; I know it has at least a small section on HDR imaging. Draft versions of the book are available for free on the associated website.

One algorithm for image alignment is the Scale Invariant Feature Transform (and another, perhaps more approachable reference, and Google will probably turn up many more). You might find autopano-sift-C and/or the open-source parts of libpano useful, either directly or for inspiration.
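As a toy illustration of what registration means (this is deliberately nothing like SIFT; real feature-based alignment matches descriptors under scale and rotation changes), here's a brute-force translation estimator you could write as a first project:

```python
def estimate_shift(ref, moved, max_shift=3):
    """Estimate the integer (dy, dx) that best aligns `moved` onto `ref`
    by exhaustive search over small translations, minimizing the mean
    squared difference on the overlapping region."""
    h, w = len(ref), len(ref[0])
    best = (None, float("inf"))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        d = ref[y][x] - moved[yy][xx]
                        err += d * d
                        n += 1
            score = err / n
            if score < best[1]:
                best = ((dy, dx), score)
    return best[0]
```

Exhaustive search only works for tiny shifts; the feature-based methods above exist precisely because real hand-held frames move by hundreds of pixels and rotate.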
[Perhaps somebody else can/will help you with the HDR part -- I won't have anything to do with that.]

Having taken an HDR class at my university, I would recommend the book "High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting" for basic knowledge. It has many sections where you can find the best algorithms in the literature.
For alignment, I recommend having a look at Greg Ward's widely used paper "Fast, Robust Image Registration for Compositing High Dynamic Range Photographs from Handheld Exposures".
For the coding part, the HDR Toolbox by Francesco Banterle is very helpful if you are interested in Matlab!
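The heart of Ward's method is the median threshold bitmap (MTB): threshold each exposure at its own median so the bitmaps look alike regardless of exposure, then count differing bits over candidate shifts. A minimal Python sketch of those two pieces (a simplified reading of the paper, not Ward's actual implementation):

```python
def median_threshold_bitmap(pixels):
    """Ward-style MTB: threshold a grayscale image at its median value,
    making the result largely invariant to exposure changes."""
    flat = sorted(p for row in pixels for p in row)
    median = flat[len(flat) // 2]
    return [[1 if p > median else 0 for p in row] for row in pixels]

def bitmap_difference(a, b):
    """Count differing bits; this is the error metric Ward minimizes
    over candidate shifts (he uses XOR on packed bitmaps for speed)."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
```

Because both a dark and a bright exposure of the same scene threshold to nearly the same bitmap, you can align them without knowing the exposure ratio.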


Documentation system that allows for reusability and Markdown support? [closed]

Closed 5 years ago.
I'm investigating different documentation systems for a project I maintain. Most recently I've been using DITA and the DITA OT, but its complexity makes me want to shoot myself.
Are there any systems that provide the following functionality:
Markdown support
Reusable content (I can refer to previously defined paragraphs or terms)
Localization support
Preferably, free or open source
Preferably, allows for multiple output formats
I wish I could use Pandoc for this, but it doesn't appear to support reusable content.
Edit: I just ended up writing my own library for this: https://github.com/gjtorikian/markdown_conrefs
If you don't mind reStructuredText instead of Markdown, Sphinx is worth a look.
You could use pandoc plus a pre- or post-processor.
That way you could easily implement snippet reuse.
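For instance, a tiny preprocessor could expand shared snippets before the file reaches pandoc. The `{{include:name}}` marker syntax below is invented for the sketch, not a pandoc feature:

```python
import re

def expand_snippets(text, snippets):
    """Replace {{include:name}} markers with the named snippet.
    `snippets` maps names to Markdown fragments; run this over your
    source, then pipe the result into pandoc as usual."""
    def repl(match):
        name = match.group(1)
        if name not in snippets:
            raise KeyError("unknown snippet: " + name)
        return snippets[name]
    return re.sub(r"\{\{include:([\w-]+)\}\}", repl, text)
```

Keeping the snippet store as plain Markdown files in the repo would also give you localization for free: one snippet directory per language.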
This is a topic close to my heart. There's quite a lot of Markdown processor options out there, but at time of writing those are more a case of personal solutions to this persistent problem. We all tend to get frustrated, make something to help in the short term, and share it.
The challenge has been to extend this into something built for purpose and at scale, which is where I've turned my focus over the last few years. That includes first working on PressGang CCMS inside a tech writing team at Red Hat, and then being inspired to spin out Corilla, a dedicated technical writing startup building the tool you require.
PressGang (the prototype)
Please refer to the PressGang CCMS project for an idea of what we did at Red Hat to build tools to solve this. The lead engineer did a run-through video that you can see on Vimeo, and I've created a public Amazon AMI if you wish to try it. It's not being maintained but it's all open source.
It's a relatively large stack written for the most part in Java, but was useful as a look into an open source project in this space. But with bias I'd suggest...
Corilla (the product)
We cofounded Corilla as an open source company to focus on bringing together the elements of content reuse and collaboration with the ease of Markdown and Asciidoc. I've spent years writing DocBook XML, and quickly built my own snippets for Sublime Text to minimise the considerable overhead of authoring in that markup. The tide is of course turning. We need easier ways to write faster, and we need them to be discoverable, reusable, and allow the entire team to generate the content in formats they require.
I'd encourage you to get involved with the beta, as the technical writing and developer community is driving the project, and as we solve our problems together. Being able to resource and drive this to market is far more rewarding than having to pick through incomplete processor chains. I've been there, it's time we did more.

How to document a system flow before coding it? [closed]

Closed 5 years ago.
Can anybody help me with this?
Here's the problem...
When I have to code, let's say, a registration form, I add the new form and start coding it. But sometimes the form is a bit complex and I find myself duplicating code and making the same verifications over and over again, making the code messy.
I was wondering if there is some sort of tool that allows me to create a flow for this form before coding it, like a flow chart, where I can find the places where I'm duplicating code and then avoid that.
thanks!
Well, the real tool/language designed for this is UML. You can read up on it.
But it's very strict, although you don't have to follow all the specs and conventions. There are several types of diagrams that cover pretty much everything, but AFAIK only four are in practical use.
Most people I know tend to draw control flow diagrams.
Google Docs drawing is perfectly fine for that.
But it depends on the type of application. I personally think more in data and like data flow diagrams.
I also like to design top-down. Other people do it differently. I mostly start with a sheet of paper and a pen and draw some stuff I could not tell the meaning of half an hour later. But I start very basic with application/database/user or something, and when a picture emerges I go into specifics using modeling tools.
I cannot design anything without knowing the greater picture, although I know it is a software developer's quality to do just that.
PS: designing a form sounds very trivial at first, although it might not be.
I think a great help is sticking to some programming patterns and paradigms you like. A good base is the MVC concept. I like to extend it with a "resource model" that does all the database stuff.
1) The best place to start is the white board. If your company doesn't have white boards, tell them to order some. Seriously. You will wonder how you lived without it.
2) Build a paper prototype with the stakeholders, or have them build one. They take maybe 30 minutes to make and solve a ton of UI arguments that otherwise would be "defects"
3) Code. That's the easy part.
4) Refactor as you fix defects. You'll notice better things you could have done, shortcuts, duplicate code. Take time to fix the defect correctly and code quality will improve. It's an iterative process.
5) Visio if you hand the process off (to support or whatever). This could be step 4 as kind of a state machine, but the paper prototypes should be enough of a process to get you started with enabling, disabling, etc.
If you're on the computer designing and writing code before you have a prototype and have whiteboarded everything out, you will have to invest a lot more time in the refactor step. Visio and other state design applications will show you what happens, but the whiteboard marker is the Excalibur of the development world.
I know this doesn't answer the question you asked, verbatim; however, solid processes are infinitely more valuable than tools.

What is a good tool for graphing sub-millisecond timelines? [closed]

Closed 8 years ago.
I'm trying to produce a timeline for my real-time embedded code.
I need to show all the interrupts, what triggers them, when they are serviced, how long they execute, etc. I have done the profiling and have the raw data, now I need a way to show the timeline graphically, to scale.
I've been searching for a good tool, but haven't come up with anything great yet. Everything that I've found works on timelines of days and years. I want a graph showing a single 2-millisecond cycle. For now I'm using Visio, but I keep thinking there must be something easier. Any ideas?
I'm hoping to produce something like this: [example timeline image].
Unfortunately, mine is more complicated, but that's the general idea.
So at that scale your abscissa is going to be a pure number (e.g. microseconds from the start time, or some such). Graphing tools to graph things like this are commonplace.
I'd suggest something like gnuplot, but I suspect there's more to the problem than is evident in your summary.
Ah, the picture makes it all much clearer. If gnuplot doesn't do it for you, I'll offer another suggestion (or at least tell you what I'd do): write it from scratch.
Specifically, I'd probably throw together something in a scripting language (ruby, python, whatever) to read the data and generate pic code that looked the way I wanted. If you decide to go that route, here's an overview of pic basics and also the manual. If you dig in you should have something plausible in an hour and within a week you'll have something that suits you better than any off the shelf GUI app ever will.
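Whatever the target language (pic, gnuplot, SVG), the script's job is the same: read events, emit drawing commands. Here's a Python sketch that emits gnuplot `set object`/`set label` commands for a microsecond-scale timeline; the tuple format is invented, standing in for your profiler's raw data:

```python
def timeline_to_gnuplot(events, total_us):
    """Emit gnuplot commands drawing one rectangle per event.
    `events` is a list of (name, start_us, duration_us, row) tuples,
    one row per interrupt/task so concurrent activity stacks vertically."""
    lines = ["set xrange [0:%g]" % total_us,
             "set yrange [0:%d]" % (max(e[3] for e in events) + 2)]
    for i, (name, start, dur, row) in enumerate(events, 1):
        lines.append("set object %d rect from %g,%g to %g,%g"
                     % (i, start, row, start + dur, row + 0.8))
        lines.append('set label %d "%s" at %g,%g'
                     % (i, name, start, row + 0.9))
    lines.append("plot -1 notitle")
    return "\n".join(lines)
```

The payoff of generating the commands rather than drawing by hand is that the next profiling run is one script invocation away from a fresh, to-scale chart.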
I feel for you. In my system, we have a 1.1 millisecond cycle and 13 measurement points over 4 different components. I suspect you're facing similar complexity.
Bad news is there are no off-the-shelf solutions I'm aware of. However MarkusQ is correct stating that you can use (abuse?) standard graphing packages to accomplish what you need. But you will need to invest some time to customize the output to your liking.
We make extensive use of the R Project driven by Python code via RPy R/Python bridge to generate our plots. This setup works very well for us and has enabled us to automate the process. Python is used to acquire and cleanse the data from the real-time system and R does the drawing.
R's graphics customization support is extensive allowing you to control all aspects of the plot, locations, sizes, etc. It can be intimidating at first, but there is an excellent book R Graphics that helps with a companion website that contains all of the book's examples.
Whatever you choose, make sure there's the ability to automate via scripting. The amount of data real-time systems generate is too much to deal with without flexible tools.
GTKWave could be used.

Technical White paper: How to write one [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Folks,
What is the best way to go about researching and presenting a technical whitepaper? I don't mean the format, overview, sections, and such.
I've never written one, and I wonder if a white paper needs to be very generic (conceptual) or specific (for instance, favouring a particular tool/methodology).
And if your answer favours the generic approach, I'd like to know how one can research for that. Is it better to focus on a smaller use-case scenario, start small, use a particular tool/method, gain good understanding and then research more and develop a wide-angle view on the subject?
Yes, try to read other technical white papers. But don't just read any white paper; read the better ones. You can usually determine which is the "better" one by checking how many times the paper has been cited (sites I go to for this are CiteSeer and Google Scholar). Some general guidelines:
Try to be straight to the point, don't beat around the bush.
Use your acronyms consistently.
Take the opportunity to state the weaknesses of previous methods, as it kinda shows you have put in effort to review/survey other's methods.
A technical paper needs to be very specific. State exactly how your method works, state exactly how you conducted the experiments (so that others can replicate them), state exactly your findings (lots of graphs would be nice), and finally conclude it in 40-60 words or so.
Emphasize what is new (the stuff you are proposing) and de-emphasize what is old (that would be your background). Make the distinction clear.
Generally, you don't include your source code in your paper. If you must, publish it on a web page together with a link to your paper.
P/S: My advice is a bit biased towards academic papers, but I think it should apply in your case.
The purpose of a white paper is usually to advocate a particular point of view or propose a specific solution to a problem.
If, however, your white paper comes across as little more than marketing or a sales pitch, you won't have made a very good case. The conventional advice is that you must begin by articulating a need that your audience has (the 'pain point' in bizspeak) and address your solution to that need.
This sounds a bit unhelpful, but white papers come in all forms, from very specific to very general. Determine what the end goal is. Are you trying to sell something, describe how a new technical widget works, or describe an experience? Also, determine your audience: are they business, technical, home users, etc.?
Take a look around at examples - most big companies (IBM etc) have hundreds on their website. Read a few and see what strikes you as the good and bad points.
My 0.02:
Read a couple, and make mind maps to get an idea of what they look like.
After you've done this analysis, go back and pick the sections your whitepaper is going to need. In particular, build ANOTHER mind map with your document structure.
Data is also an important way to convey information, so think a while about data visualization techniques before charting your data.
A whitepaper could be either general or very specific. It depends entirely on the subject, audience, and the intent.
For example, a paper on an R+D topic, or presented within academia, or designed to provide a conceptual sketch of some future work is going to be written in a more passive voice, almost Q+A format. A discussion. You'll probably present multiple ideas and might pro/con them without necessarily reaching a fixed conclusion.
A whitepaper on a particular technology, for a clients' benefit of clarification, or to illustrate or document some result will be very firm, fixed, and have definite conclusions. Numbers.
The only thing you can say generally is that the process works from the vague -> specific.
How to Write a White Paper – A White Paper on White Papers
The author of that piece has also written a book:
Writing White Papers: How to Capture Readers And Keep Them Engaged

Are there some projects that rate RPG source? like software metrics? [closed]

Closed 7 years ago.
I just wanted to know if you know of some projects that can help decide whether analyzed RPG source is good or bad code.
I'm thinking in terms of software metrics, the McCabe Cyclomatic Number, and all those things.
I know that those numbers are merely a hunch or two, but if you can present your management with a point score they are happy, and I get to modernize all those programs that otherwise work as specified but are painful to maintain.
So yeah... know any code analyzers for (ILE) RPG?
We have developed a tool called SourceMeter that can analyze source code conforming to the RPG III and RPG IV versions (including free-form). It provides the McCabe Cyclomatic Number and many other source code metrics that you can use to rate your RPG code.
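For a rough sense of what such a metric computes, here is a toy McCabe counter in Python. The opcode list is a simplified stand-in for free-form RPG; a real analyzer parses the language properly:

```python
# Decision opcodes that open a new branch in (simplified) free-form RPG.
DECISION_OPS = {"IF", "ELSEIF", "WHEN", "DOW", "DOU", "FOR"}

def cyclomatic_complexity(source):
    """McCabe number = decision points + 1, counted naively by
    looking only at the first token of each statement line."""
    count = 1
    for line in source.splitlines():
        tokens = line.strip().rstrip(";").upper().split()
        if tokens and tokens[0] in DECISION_OPS:
            count += 1
    return count
```

Even this crude count lets you rank programs against each other, which is usually all management needs to see before signing off on refactoring time.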
If the issue is that the programs are painful to maintain, then the metric should reflect how much pain is involved in maintaining them, such as "time to implement new feature X" vs. "estimated time if the codebase wasn't a steaming POS".
However, those are subjective (and always will be). IMO you're probably better off refactoring mercilessly to remove pain points from your development. You may want to look at the techniques of strangler applications to bring in a more modern platform to deliver new features without resorting to a Big Bang rewrite.
The SD Source Code Search Engine (SCSE) is a tool for rapidly searching very large sets of source code, using the language structure of each file to index it according to code elements (identifiers, operators, constants, string literals, comments). The engine is usable with a wide variety of languages such as C, C++, C#, and Java, and there's a draft version for RPG.
To the OP's original question, the SCSE engine happens to compute various metrics over files as it indexes them, including SLOC, comment lines, blank lines, and Halstead and cyclomatic complexity measures. The metrics are made available as a byproduct of the indexing step. Thus, various metrics for RPG could be obtained.
I've never seen one, although I wrote a primitive analyser for RPG400. With the advent of free-form and subprocedures, it was too time-consuming to modify. I wish there were an API that gave me access to the compiler's lexical tables.
If you want to try it yourself, consider reading the bottom of the compiler listing and using the line numbers to at least get an idea of how long a variable lives. For instance, a global variable is 'worse' than a local variable. That can only be a guess because of GOTO and EXSR.
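That back-of-the-listing idea could be sketched like this in Python; the cross-reference structure is invented here, since real compiler listing formats vary:

```python
def live_spans(xref):
    """`xref` maps variable name -> list of line numbers where it is
    referenced, as scraped from the cross-reference section at the
    bottom of a compiler listing. Returns name -> span in lines.
    Only a rough guess at lifetime: GOTO and EXSR can defeat it."""
    return {name: max(lines) - min(lines) + 1 for name, lines in xref.items()}

def worst_offenders(xref, top=3):
    """Variables with the widest live spans: candidates for localizing."""
    spans = live_spans(xref)
    return sorted(spans, key=spans.get, reverse=True)[:top]
```

A global touched on line 12 and line 901 scores a span of 890, while a loop counter used in a tight block scores a handful, which matches the intuition that the global is the maintenance hazard.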
Lot of work.