Technical White Paper: How to Write One [closed]

Folks,
What is the best way to go about researching and presenting a technical whitepaper? I don't mean the format, overview, sections, and such.
I've never written one, and I wonder whether a white paper needs to be very generic (conceptual) or specific (for instance, favouring a particular tool/methodology).
And if your answer favours the generic approach, I'd like to know how one can research for that. Is it better to focus on a smaller use-case scenario (start small, use a particular tool/method, gain a good understanding) and then research more and develop a wide-angle view of the subject?

Yes, try to read other technical white papers. But don't just read any white paper; read the better ones. You can usually determine which are the "better" ones by checking how many times a paper has been cited (the sites I use for this are CiteSeer and Google Scholar). Some general guidelines:
Be straight to the point; don't beat around the bush.
Use your acronyms consistently.
Take the opportunity to state the weaknesses of previous methods, as it shows you have put in the effort to review/survey others' methods.
A technical paper needs to be very specific. State exactly how your method works, state exactly how you conducted the experiments (so that others can replicate them), state exactly your findings (lots of graphs would be nice), and finally, conclude it in 40-60 words or so.
Emphasize what is new (the stuff you are proposing) and spend less time on what is old (that would be your background). Make the distinction clear.
Generally, you don't include your source code in your paper. If you must, publish it on a web page together with a link to your paper.
P/S: My advice is a bit biased towards academic papers, but I think it should apply in your case.

The purpose of a white paper is usually to advocate a particular point of view or propose a specific solution to a problem.
If, however, your white paper comes across as little more than marketing or a sales pitch, you won't have made a very good case. The conventional advice is that you must begin by articulating a need that your audience has (the 'pain point' in bizspeak) and address your solution to that need.

This sounds a bit unhelpful, but white papers come in all forms, from very specific to very general. Determine what the end goal is. Are you trying to sell something, describe how a new technical widget works, or describe an experience? Also, determine your audience: are they business, technical, home users, etc.?
Take a look around at examples - most big companies (IBM, etc.) have hundreds on their websites. Read a few and see what strikes you as the good and bad points.

My 0.02:
Read a couple, and make mind maps to get an idea of what they look like.
After you've done this analysis, go back and pick the sections your whitepaper is going to need. In particular, build ANOTHER mind map with your document structure.
Data is also an important way to convey information, so think for a while about data visualization techniques before charting your data.

A whitepaper could be either general or very specific. It depends entirely on the subject, the audience, and the intent.
For example, a paper on an R&D topic, or one presented within academia, or one designed to provide a conceptual sketch of some future work, is going to be written in a more passive voice, almost in a Q&A format. A discussion. You'll probably present multiple ideas and might weigh their pros and cons without necessarily reaching a fixed conclusion.
A whitepaper on a particular technology, written to clarify something for a client or to illustrate or document some result, will be very firm and fixed, with definite conclusions. Numbers.
The only thing you can say generally is that the process works from the vague -> specific.

How to Write a White Paper – A White Paper on White Papers
The author of that piece has also written a book:
Writing White Papers: How to Capture Readers And Keep Them Engaged


Database documentation and reverse engineering [closed]

I know this question has been asked in various forms over and over again, but I cannot find a single complete answer, and I believe it is a general problem in the RDBMS/data area and industry. To explain the problem, I will tell you a short (and maybe boring) story!
The Story
"I have a friend" who works in A company that uses 100+ systems. The scale and size of these systems vary from full-blown ITIL to custom/in-house, single purpose, LAMP/SQLite/CSV-based solutions. The vast majority of these systems, at one point or another use a database/data-store... Big-data has now become a trend, and A company though it is a very good idea to keep (or log) historical data from all systems forever! For that reason they have built a "warehouse". My friend is responsible for writing software that will do the analysis on some of this data ... however, he is kinda confused. There are thousands of tables in that warehouse containing data from the beginning of time (1970s I think :)).
The Problem
[Since I started telling you about this guy, I should probably continue]
My friend is very upset about the lack of documentation in that warehouse. It seems that no one knows what is what! A few of the problems he faces (and I quote):
Man, some fields are constants... they have a special meaning to the application, but I have no way of knowing it! But that is OK... because some other fields are bit-masks! Different bit values in the field have different meanings!
and he continues...
That's not all... those are the easy cases, you know... Since we have data from multiple systems, we end up with situations where different systems refer to the same thing in different ways... how can I explain it to you... for example, a network device has an FQDN; some systems treat it as the primary key, while others allocate an auto-increment integer value instead, which in turn they use for foreign keys (you know... referencing this device).
and he can go on forever!
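To make his complaint concrete, here is a minimal sketch of the kind of mismatch he is describing. The table and column names are hypothetical (invented for illustration), and the syntax is MySQL-flavoured:

    -- System A: the device's FQDN is the natural primary key.
    CREATE TABLE sys_a_device (
        fqdn   VARCHAR(255) PRIMARY KEY,  -- e.g. 'sw01.example.com'
        status INT NOT NULL               -- bit-mask; its meaning lives in the application, not the schema
    );

    -- System B: a surrogate auto-increment key; the FQDN is just another column.
    CREATE TABLE sys_b_device (
        device_id INT AUTO_INCREMENT PRIMARY KEY,
        fqdn      VARCHAR(255)            -- same real-world entity, different identity
    );

    -- System B's child tables reference the surrogate key, so joining warehouse
    -- data from A and B requires knowing that fqdn is the real link between them.
    CREATE TABLE sys_b_interface (
        interface_id INT AUTO_INCREMENT PRIMARY KEY,
        device_id    INT NOT NULL,
        FOREIGN KEY (device_id) REFERENCES sys_b_device (device_id)
    );

Nothing in the schemas themselves records that sys_a_device.fqdn and sys_b_device.fqdn identify the same thing; that knowledge exists only in people's heads.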
The Question
[Yes, it is one question]
He says:
We have come a long way regarding documentation in the software world... we started with documents, moved to wikis, and ended up with inline docblocks serving both as parameter/signature documentation and as a wiki! We can auto-generate documentation clear enough that a person on the other side of the world can easily follow it!
and he continues:
... on the data side of things, we have also made major advances! Storage methods, serialization, transmission, and data-analysis techniques have evolved tremendously... We have also managed to map database tables onto objects, and in some cases we can even represent relationships!
So why the frell don't we have a standard method/technique for documenting our data structures in an RDBMS?
... he concluded :)
Enough about my friend; now my comments:
I know about comments on fields in various systems, but those are usually only enough for a "deprecated" note, not for an explanation.
Updating a wiki, or even worse a document, every time you release a database patch is not a solution... the patch itself should contain the relevant documentation!
ER diagrams can easily be generated from the schema information, but they are not the easiest form of documentation to read for anything with more than 10 tables!
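For example, a crude textual data dictionary can be pulled straight out of the same schema information, and it scales past 10 tables much better than a diagram. A minimal sketch, assuming MySQL/MariaDB ('warehouse' is a hypothetical schema name):

    -- Dump a plain-text data dictionary straight from the catalogue.
    -- Other engines expose similar catalogue views.
    SELECT table_name,
           column_name,
           column_type,
           column_comment   -- whatever documentation was stored in the schema, if any
    FROM   information_schema.columns
    WHERE  table_schema = 'warehouse'
    ORDER  BY table_name, ordinal_position;

But this only surfaces whatever comments someone bothered to store in the first place.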
There is the saying (please comment if you know who said this! - respect)
Documentation is like sex: when it is good, it is very, very good; and when it is bad, it is better than nothing
Why doesn't SQL provide the means for any of this?
Any kind of documentation will go stale if it is not maintained.
Also, the SQL world provides all kinds of possibilities for documenting things:
comments in SQL files
comments in column/table metadata (see the sketch after this list)
as you said - E/R diagrams
the classic way of documenting stuff - docs and wikis
good discipline in adhering to an intuitive naming scheme for the things in the DB - I think this should be the standard
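To illustrate the metadata option: comments can live in the schema itself, so they travel with every dump and every patch. A minimal sketch with a hypothetical table, using MySQL's inline COMMENT syntax plus the PostgreSQL COMMENT ON equivalent:

    -- MySQL: comments are stored in the table metadata and show up in
    -- SHOW CREATE TABLE, mysqldump output, and information_schema.
    CREATE TABLE device (
        device_id INT AUTO_INCREMENT PRIMARY KEY,
        fqdn      VARCHAR(255) NOT NULL
            COMMENT 'Fully qualified domain name; unique per physical device',
        status    INT NOT NULL
            COMMENT 'Bit-mask: 0x01 = monitored, 0x02 = managed, 0x04 = decommissioned'
    ) COMMENT = 'One row per network device known to the warehouse';

    -- PostgreSQL equivalent: comments are attached after creation.
    COMMENT ON TABLE device IS 'One row per network device known to the warehouse';
    COMMENT ON COLUMN device.status IS 'Bit-mask: 0x01 = monitored, 0x02 = managed, 0x04 = decommissioned';

A database patch that adds or changes a column can carry its documentation in the same statement, which addresses the "the patch should contain the relevant documentation" point above.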
We have all the tools we need, we just have to convince our managers to let us write the docs (lol)

Appreciating the value of good design [closed]

Having recently worked with a bunch of people (from two different companies) right out of school or with 1-2 years of experience, I was initially quite impressed with their knowledge of the various industry buzzwords, design patterns, etc. Furthermore, they each had a good understanding of OO design principles and the use of interfaces.
To cut a long story short... in just a few days of working with them I found that things were not as they appeared.
Let me define some terms I'll use here:
Knowledge – something you learn in school, from a book, on the Internet, etc.
Experience – the amount of time you've been doing something.
Skill – gained only through experience; that is, acquiring, over time, the ability to apply the knowledge you have.
What I found was that even though they knew this stuff, they didn't really know how to apply it. You'd have all these patterns waving in your face, but any code they had to write of their own accord had basic flaws. They could tell you the virtues of a certain design pattern and could come up with something of an implementation, but they could not recognize basic flaws in the design.
Of course I had my fair share of "the one who knows not that he knows not" (Confucius).
Each night I'd spend a lot of time going back over everything that was said during the day, trying to understand who was saying what and why, and trying to figure out what I could do by way of examples during training or code review. But frankly, I was quite puzzled.
After about 2-3 weeks I started to figure it out.
Anyway, the questions first:
1. Have you experienced this sort of thing?
2. How did you (or do you) tackle it?
My conclusion was that either the schools are doing a bad job, or Google is their friend and they're picking up all this "knowledge" and think they know.
But I feel:
In order to recognize and appreciate good design, one MUST write code that is, well... not so well designed. Struggle with it, then fix it, to know the pain, and thereby learn to recognize both good and bad design and appreciate the good.
Practice and experience – you just can't beat that. Experience (and the quality of that experience) brings so much to the table that you can't match it with knowledge alone or with a little bit of experience.
Some other things I experienced:
"Why is this an interface and not a base class?" – you'll get all kinds of answers, but none of them is the right reason.
Why this design pattern and not that? Or forget design patterns for a minute and just design (they're utterly lost – that's when you see their real design and coding skills).
Over-engineering – they don't recognize it and can't appreciate that it could become a maintenance nightmare as the system grows. I found this to be a big issue. It's as if everything has the potential to change. A simple process of sending an email involves 3 classes, in addition to the various classes in the .NET Framework you'd use to send an email.
Using all the new features in the framework or language just because (I've even seen this in some of Microsoft's source code for a certain framework whose source is available).
So 10 years from now, everyone writing code will be writing it using all the fancy framework and language features and all the possible design patterns, such that "legacy" code will be well written and well designed. Or will it? What do you think?
Does anyone else feel that 10 years from now we'll just be sifting through a different kind of muck? Muck that's scattered across a dozen more code files than it used to be, because now we've got classes and so-called loosely coupled code, but it's just a different kind of mess, and in fact harder to clean up?
Interesting deliberations. I have always felt that, over time, we are over-engineering our systems with all the patterns flying around. An extra layer of abstraction means more failures of understanding in the future. My personal approach is to keep things simple and only introduce complexity if it is required; decouple only if decoupling is required. Many design requirements flow into systems because we blindly write in the requirements document that the system should be maintainable, reliable, and all the other *ables. It is also necessary to understand the degree to which we want these *ables and, more importantly, how they impact our budget and business value, both in the shorter and the longer term.
One important aspect is always to keep a very tight focus on business requirements, at every stage, both in terms of functionality and budget.
I completely agree that the newer breed of developers appear very knowledgeable when it comes to design patterns and the latest buzzwords like Hibernate, JSON, NAnt, Ajax, etc. On the other hand, I have found that even the best among them, those who could be considered star programmers, appear to have limited understanding and knowledge of what is really happening under the hood.
I have had several conversations in the past with young guns who viewed Spring as a major innovation, trying to convince them that what this framework provides us through reflection is the evolution of things like IDL, type libraries, COM, and CORBA.
When it comes to design patterns and the terminology introduced by the Gang of Four, we all know that their proposed architectures had been used for decades before, and a senior developer was using them almost intuitively without knowing the formal difference between a regular factory and an abstract one. There is no doubt, of course, that the formalization introduced by the design-patterns movement was beneficial for the industry, although the recognition and successful implementation of patterns still (and probably always will) rely on the experience and talent of the developer, since this process can never become purely mechanical and deterministic.
An additional point I have to make regarding newcomers to the field of software development is their inclination to spread their skill set very horizontally, trying to cover as many technologies as possible, as opposed to concentrating deeply on a specific domain and mastering it.

Any good tutorials or resources for learning how to design a scalable and "component" based game 'framework'? [closed]

In short, I'm creating a 2D MMORPG, and unlike my last "MMO" attempt, I want to make sure this one will scale well and work well when I want to add new in-game features or modify existing ones.
With my last attempt, an avatar chat, within the first few thousand lines of code, while just getting basic features into the game, I saw my code quality dropping, and my ability to add new features or modify old ones kept getting worse as I added more. It turned into one big mess that somehow ran, lol.
This time I really need to buckle down and find a design that will let me create a game framework in which it is easy to add and remove features (things like playing mini-games within my world, a mail system, a buddy list, or a new public area with interactive items).
I'm thinking that a component-based approach MIGHT be what I'm looking for, but I'm really not sure. I have read documents on MMORPG design and 2D game-engine architecture, but nothing really explained a way of designing a game framework that would basically let me "plug in" new features to the main game.
Hope someone understands what I mean, any help is appreciated.
If you search for component-based systems within games, you will find something quite different from what you are actually asking for, and how best to do that is far from agreed upon just yet anyway, so I wouldn't recommend going down that path. What you're really talking about is not specific to games, never mind MMOs: it's the ability to write maintainable code that allows for extension and improvement, which was a problem for business software long before games-as-a-service became so popular and important.
I'd say that addressing this problem comes primarily from two things. Firstly, you need a good specification and a resulting design that makes an attempt to understand future requirements, so that the systems you write now are more easily extended when you come to that. No plug-in architecture can work well without a good idea of what exactly you hope to be plugging in. I'm not saying you need to draw up a 100-page design doc, but at the very least you should be brainstorming your ideas and plans and looking for common ground there, so that when you're coding feature A, you are writing it with future feature B in mind.
Secondly, you need good software-engineering principles, which mean that your code is easy to work with and use. E.g., read up on the SOLID principles and take some time to understand why these 5 ideas are useful. Code that follows those rules is a lot easier to bend to whatever future needs you have.
There is a third way to improve your code, but which isn't going to help you just yet: experience. Your code gets better the more you write and the more you learn about coding. It's possible (well, likely) that with an MMO you are biting off a lot more than you can chew. Even teams of qualified professionals end up with unmaintainable messes of code when attempting projects of that magnitude, so it's no surprise that you would, too. But they have messes of code that they managed to see to completion, and often that's what it's about, not about stopping and redesigning whenever the going gets tough.
Yes, I get what you want...
Basically, you will have to use classic OOP design, the same kind that business-software coders use...
You will first have to lay out the basic engine. That engine should have a "module loader" or a common OOP-style interface; then you either code modules to be loaded (as .dlls, for example) or you code directly within your source using that OOP-style interface, and NEVER, EVER allow modules to depend on each other...
The communication, even inside your code, should ALWAYS go through an interface. Never expose "public" vars in your modules and use them somewhere else; otherwise you will end up with awful, messy code.
But if you do it properly, you can do some really cool stuff. (I, for example, changed the entire game library (the API that accesses video, mouse, keyboard, audio...) of my game in the middle of development... I just needed to recode one file, the one that formed the interface between the logic and the game library.)
What you're thinking about is exactly what this article describes. It's a lovely way to build games, as I have blogged about, and the article is an excellent resource to get you started.

Do you rely on your memory, or consult references and use a lot of IntelliSense? [closed]

I have noticed I do not code as much as I used to. Today I dedicate more time to analysis and design, then communicate that design to programmers, and they do the coding. This has affected my coding productivity, because I must consult references and rely on IntelliSense. Things are becoming more complex every day.
Now, here is the irony. If I were to hire a programmer and have him/her sit in front of a computer, I might ask for some coding and check their abilities. I would evaluate them based on their use of memory vs. their consulting of references. Maybe I would prefer the programmer who did not consult references too much but who knows what they are doing.
What is your opinion and experience?
I would say that a developer who knows how to find the answers is better than one who already has good all-round knowledge. I find that IntelliSense is a good tool for finding answers; besides, it is too much to remember all the method names, arguments, overloads, etc.
I use memory to get me into the right general area (e.g. knowing which classes to use or at least which namespace they'll be in) and then often Intellisense/MSDN for the exact method name or arguments to use.
Having said that, Stack Overflow is improving my ability to code without any references (or even compilation) - I'm sure code will just work out of the box for me more often now than it used to. (I tend to post and then check the code works, add links to MSDN etc - assuming I'm reasonably confident in the approach.)
Knowing what resources are available, how to find the answers, and how to debug effectively - these are the qualities I now look for in prospective employees.
I used to consult my memory only, but two things have happened:
Class libraries have gotten larger, and so has the number of languages available.
The ratio of programming-related memory to personal-life-related memory has shifted away from code.
Programming today is also eight times harder than it was when I started. I used to work on 8-bit machines, now I'm working on 64-bit ones. :)
I was once at a job interview with the CTO of a company. He asked a question based on a real-life problem the company had had a while back and solved. It was a multi-step problem.
I was standing in front of a whiteboard, working through my solution and struggling through a particular part, a part I would have googled before even attempting it had I been tasked with solving this problem for real instead of in an interview. At that point he asked me, "Would you do anything differently if this wasn't an interview question?" I responded, "Yes. I would exhaust all possibilities of using a third-party component for this part of the task, and I would look up the solution, because it is a well-defined problem that's been solved several times." There was a bit more discussion in which I justified my answer and explained exactly what I would research, and I solved some other parts of the question. In the end I was offered and accepted the job, partly because of knowing how to find out what I didn't know.
Being able to use references is as important as being able to code from memory. Obviously, if you are a one-language shop and want people proficient in that language, the person should be able to write a complete hello-world app in Notepad. Interview problems should focus on small problems, and one should not worry about small syntax errors. This is why a whiteboard is the best IDE for interview questions.
Unless you demand that all your coders use Notepad and don't give them internet access, don't be as concerned with syntax. If you do sit them down in front of a computer, look at the finished product as well as the technique used to get there.
I'm a PHP programmer in my early 30s. I rely on PHP's excellent documentation, for several reasons:
Programming concepts don't change. If I know what my object models are and how I want to manipulate data, then there are dozens of ways to implement the details. The details are important, but a firm grasp of the design and structure is more important.
PHP has notoriously inconsistent functions. One string function might take ($needle, $haystack) as its parameters, and another might take ($haystack, $needle). Trying to keep them straight isn't worth the hassle when you can just type php.net/function_name and get the reference.
I don't rely on IntelliSense, simply because I haven't found a decent PHP IDE that does it well. Eclipse is OK, but it's not fantastic. NetBeans gives me "PHPDoc not found" for all the built-in PHP functions whenever I install it. Nothing I've found so far beats the documentation.
The bottom line is that the ability to memorize functions isn't indicative of coding ability. Obviously there's a core set of basic functions that a good programmer will know just from extensive use over time, but I wouldn't base a hiring decision on whether someone knows substr_replace vs. str_replace from memory.
Because I've read the documentation, or articles, or a book on a subject, the things I learn about a topic are organized. The result is that, if I can't bring something up from memory, I can probably find it quickly through IntelliSense or the Object Browser.
If worse comes to worst, I can pick up the book again; something these youngsters are not being taught to do.
John Saunders
Age 51
Pretty much Google + old projects + my memory (of course).
References will not solve your problems, though; they're only for the nuts and bolts. The higher level of problem solving is the actual "programming" part, IMHO.
I tend to use IntelliSense and ReSharper much more than I used to, but this has helped my overall productivity. If I can get the idea of how I want to solve something and then use tools for the more boring parts, like class names and function signatures, why shouldn't I use the tools I have? I feel relieved that Jon Skeet seems to have a similar approach.
I rely on my bookmarks and books... and my ability to use them effectively. I have multiple books above my desk, including a copy of the ISO C90 standard. Moreover, I use Xmarks to have access to my bookmarks wherever I go. Sometimes, if a particular page is important enough, I make a PDF of it and upload it to my website.
Sometimes the information provided by the resources I use makes its way into my terrible memory... maybe.

Allocating resources for project documentation [closed]

What would you suggest for the following scenario:
A dozen developers need to design and build a complex system. The design needs to be documented for future developers, and the design decisions must be recorded. These reports need to be produced about every two months. My question is how this project should be documented.
I see two possibilities. Each developer writes about the things they helped design and integrate, and then one person combines these documents. The final document will probably be incoherent or redundant in places, since the person tasked with assembling everything won't have much time to adjust every part.
Assume that the documentation parts from each developer arrive only a few days before the deadline. A collaborative system (i.e. a wiki) wouldn't work properly, since there wouldn't be anything to read until a few days before the deadline.
Or should a few people (2-3) be tasked with writing the documentation while the rest of the team works on actually developing the system? The developers would need a way to transfer their design choices and conclusions to the technical writers. How could this be done efficiently?
We approach this from two sides, using a RUP-style approach. On one side, you have a domain expert who is responsible for roughing out the design of what you're going to deliver, with developers chipping in as necessary. On the other, we use a technical author: they document the application, so they should have a good idea of how it hangs together, and you involve them right through the design and development process. That way, they can help polish the design and make sure it matches what they thought was being developed.
We use Confluence (Atlassian's wiki-like thing) and document all kinds of different "things". The developers do it continuously, and we push each other for docs - we let peer pressure decide what is necessary. Whenever someone new comes along, he/she is tasked with reading through everything and finding out what is still correct. The incorrect stuff is either deleted or updated as a consequence. We're happy when we can delete stuff ;)
The nice thing about this process is that the relevant stuff stays and the irrelevant stuff gets deleted. We always "got away" from the more formalized demands by claiming that we could always construct the Word documents "they" wanted if they needed them. "They" never did.
I think alternative 2 is the less agile one, because it adds a new stage to the project (although it may run in parallel with testing).
If you are in an agile model, then just add documentation (following a guideline) as a story.
If you are in a staged approach, I would nevertheless ask developers to work on the documentation, following some guidelines, and review that documentation along with the design and the code. Eventually, you may have a technical writer review everything for proper English, but that would be a kind of "release" activity.
I think you can use Sandcastle to document your project.
Check it out:
Sandcastle from Microsoft
It's not complete documentation, but making sure that interfaces etc. are commented using Doxygen-style comments means that writing code and documenting it are closer together.
That way, developers document what they do. I still think a review by the architect(s) is needed to ensure consistent quality, but making sure people document what they do is the best way to ensure they follow the architecture.