How to set up a wiki for emergency response? - structure

The company I work for is currently preparing to expand and take on emergency support (after-hours support).
We currently have a wiki set up with a lot of information.
However, bits and pieces are scattered across the entire wiki (depending on which department they belong to, etc.).
Whoever is on support that night/weekend needs to be able to help customers with their problems quickly. E.g. if our server is being very slow, there needs to be a troubleshooting guide of some sort so that person can dig straight into it.
I have googled quite a bit but was not able to find anything useful. So here is my question:
How would you structure your wiki (by topic, by symptom, by solution?) to minimize the time the on-call person has to spend looking for information?
Personally, I am thinking of using some sort of syntax such as:
Symptom: high CPU utilization
Keywords: slow server, high CPU usage
That way, when you search the wiki, the page would most likely come up in the results. But what if the issue is more software related, such as a misconfiguration?

Is there a Wikipedia API?

Yes. The wiki API URL is http://en.wikipedia.org/w/api.php
You also need a web service that accepts the parameters the search will be performed on, as well as a UI page.
The web service will hold whatever logic is needed.
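To make that concrete, here is a minimal sketch (Python with the requests library; the search_wiki helper name is just illustrative) of a small service calling the standard MediaWiki search module at that URL:

    import requests

    API_URL = "http://en.wikipedia.org/w/api.php"  # the endpoint mentioned above

    def search_wiki(term, limit=5):
        """Run a full-text search through the MediaWiki API and return matching page titles."""
        params = {
            "action": "query",
            "list": "search",
            "srsearch": term,
            "srlimit": limit,
            "format": "json",
        }
        response = requests.get(API_URL, params=params, timeout=10)
        response.raise_for_status()
        return [hit["title"] for hit in response.json()["query"]["search"]]

    if __name__ == "__main__":
        for title in search_wiki("slow server high CPU usage"):
            print(title)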

What's the best way to monitor your REST API? [closed]

I've created an API based on the RESTful pattern and I was wondering what's the best way to monitor it? Can I somehow gather statistics on each request and how deep could I monitor the requests?
Also, could it be done by using open source software (maybe building my own monitoring service) or do I need to buy third party software?
If it could be achieved by using open source software where do I start?
Start with identifying the core needs that you think monitoring will solve. Try to answer the two questions "What do I want to know?" and "How do I want to act on that information?".
Examples of "What do I want to know?"
Performance over time
Largest API users
Most commonly used API features
Error occurrence in the API
Examples of "How do I want to act on that information?"
Review a dashboard of known measurements
Be alerted when something changes beyond expected bounds
Trace execution that led to that state
Review measurements for the entire lifetime of the system
If you can answer those questions, you can either find the right third-party solution that captures the metrics you're interested in, or inject monitoring probes into the right sections of your API that will tell you what you need to know. I noticed that you're primarily a Laravel user, so many of the metrics you want can likely be captured by adding before (Registering Before Filters On a Controller) and after (Registering an After Application Filter) filters to your application, to measure response time and successful completion of the response. This is where the answers to the first set of questions are most important ("What do I want to know?"), as they will guide where and what you measure in your app.
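The Laravel documentation linked above covers the framework-specific way to wire these filters up; purely as an illustration of where such before/after probes sit, here is a framework-agnostic sketch in Python (the endpoint name and logger are made up):

    import logging
    import time
    from functools import wraps

    logger = logging.getLogger("api.metrics")

    def timed(endpoint_name):
        """Wrap a request handler with a 'before' and 'after' probe that logs elapsed time."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()              # "before" filter
                outcome = "ok"
                try:
                    return func(*args, **kwargs)
                except Exception:
                    outcome = "error"
                    raise
                finally:                                 # "after" filter
                    elapsed_ms = (time.perf_counter() - start) * 1000
                    logger.info("endpoint=%s outcome=%s elapsed_ms=%.1f",
                                endpoint_name, outcome, elapsed_ms)
            return wrapper
        return decorator

    @timed("users.show")
    def show_user(user_id):
        # ... real handler logic would live here ...
        return {"id": user_id}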
Once you know where you can capture the data, selecting the right tool becomes a matter of choosing between (roughly) two classes of monitoring applications: highly specialized monitoring apps that are tightly bound to the operation of your application, and generalized monitoring software that is more akin to a time series database.
There are no popular open source examples of the highly specialized case, to my knowledge. Many commercial solutions do exist, however: NewRelic, Ruxit, DynaTrace, etc. Their function could be described as similar to a remote profiler, with many other functions besides. (Also, don't forget that a more traditional profiler can be useful for collecting some of the information you need; while it definitely will not supplant monitoring your application, there's a lot of valuable information to be gleaned from profiling even before you go to production.)
On the general side of things, there are many more open source options that I'm personally aware of. The longest lived is Graphite (a great intro to which can be found here: Measure Anything, Measure Everything), which is in pretty common use. Graphite is far from the only option, however; you can find many other options like Kibana and InfluxDB should you wish to host yourself.
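For the Graphite route specifically, metrics usually reach it through StatsD's plain-text UDP protocol; a rough sketch (host, port and metric name are placeholders):

    import socket
    import time

    STATSD_ADDRESS = ("localhost", 8125)  # placeholder: point this at your StatsD instance

    def record_timing(metric, elapsed_ms):
        """Send a timer metric to StatsD, which aggregates and forwards it to Graphite."""
        payload = f"{metric}:{int(elapsed_ms)}|ms"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload.encode("ascii"), STATSD_ADDRESS)

    start = time.perf_counter()
    # ... handle an API request here ...
    record_timing("api.users.show.response_time", (time.perf_counter() - start) * 1000)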
Many of these open source options also have hosted versions available from several providers. Additionally, you'll find that there are many entirely commercial options in this camp (I'm a founder of one, in fact: Instrumental).
Most of these commercial options exist because application owners have found it pretty onerous to run their own monitoring infrastructure on top of running their actual application; maintaining availability of yet another distributed system is not high on many ops personnel's wishlists. :)
(I'm clearly biased in answering this, since I co-founded Runscope, which I believe is the leader in API monitoring, so you can take this all with a grain of salt or trust my years of experience working with thousands of customers specifically on this problem :)
I don't know of any OSS tools specific to REST(ful) API monitoring. General purpose OSS metrics monitoring tools (like Graphite) can definitely help keep tabs on pieces of your API stack, but don't have any API-specific features.
Commercial metrics monitoring tools (like Datadog) or Application Performance Monitoring (APM) tools like (New Relic or AppDynamics) have a few more features specific to API use cases, but none are centered on it. These are a useful part of what we call a "layered monitoring approach": start with high-level API monitoring, and use these other tools (exception trackers, APM, raw logs) to dive into issues when they arise.
So, what API-specific features should you be looking for in an API monitoring tool? We categorize them based on the three factors that you're generally monitoring for: uptime/availability, performance/speed and correctness/data validation.
Uptime Monitoring
At a base level you'll want to know if your APIs are even available to the clients that need to reach them. For "public" APIs (meaning available on the public internet, not necessarily publicized; a mobile backend API is public but not necessarily publicized), you'll want to simulate the clients that are calling them as much as possible. If you have a mobile app, it's likely the API needs to be available around the world. So at a bare minimum, your API monitoring tool should allow you to run tests from multiple locations. If your API can't be reached from a location, you'll want notifications via email, Slack, etc.
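The probe such a tool runs from each location is conceptually very simple; a toy sketch (the health-check URL and the alerting step are hypothetical):

    import requests

    ENDPOINTS = ["https://api.example.com/health"]  # hypothetical endpoints to watch

    def check(url):
        """Return (is_up, detail) for one availability probe from a single location."""
        try:
            response = requests.get(url, timeout=5)
            return response.ok, f"HTTP {response.status_code}"
        except requests.RequestException as exc:
            return False, str(exc)

    for url in ENDPOINTS:
        up, detail = check(url)
        if not up:
            # A real tool would fan this out to email, Slack, PagerDuty, etc.
            print(f"ALERT: {url} unreachable ({detail})")
        else:
            print(f"OK: {url} ({detail})")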
If your API is on a private network (corporate firewall, staging environment, local machine, etc.) you'll want to be able to "see" it as well. There are a variety of approaches for this (agents, VPNs, etc.); just make sure you use one your IT department signs off on.
Global distribution of testing agents is an expensive setup if you're self-hosting, building in-house or using an OSS tool. You need to make sure each remote location you set up (preferably outside your main cluster) is highly-available and fully-monitored as well. This can get expensive and time-consuming very quickly.
Performance Monitoring
Once you've verified your APIs are accessible, you'll want to start measuring how fast they perform, to make sure they're not slowing down the apps that consume them. Raw response time is the bare-minimum metric you should be tracking, but it is not always the most useful. Consider cases where multiple API calls are aggregated into a view for the user, or actions by the user generate dynamic or rarely called data that may not be present in a caching layer yet. These multi-step tasks or workflows can be difficult to monitor with APM or metrics-based tools, as those don't have the capability to understand the content of the API calls, only their existence.
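As a sketch of what such a multi-step workflow check looks like (the endpoints and field names are hypothetical), the monitor chains calls and times the whole flow rather than each request in isolation:

    import time
    import requests

    BASE = "https://api.example.com"  # hypothetical API under test

    start = time.perf_counter()

    # Step 1: list orders, as the app's overview screen would
    orders = requests.get(f"{BASE}/v1/orders", timeout=10).json()
    first_id = orders[0]["id"]

    # Step 2: fetch the detail view for that order, reusing data from step 1
    detail = requests.get(f"{BASE}/v1/orders/{first_id}", timeout=10)
    detail.raise_for_status()

    total_ms = (time.perf_counter() - start) * 1000
    print(f"workflow completed in {total_ms:.1f} ms")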
Monitoring for speed externally is also important, to get the most accurate representation of performance. If the monitoring agent sits inside your code or on the same server, it's unlikely to take into account all the factors an actual client experiences when making a call: things like DNS resolution, SSL negotiation, load balancing, caching, etc.
Correctness and Data Validation
What good is an API that's up and fast if it's returning the wrong data? This scenario is very common and is ultimately a far worse user experience. People understand "down"... they don't understand why an app is showing them the wrong data. A good API monitoring tool will allow you to do deep inspection of the message payloads going back and forth. JSON and XML parsing, complex assertions, schema validation, data extraction, dynamic variables, multi-step monitors and more are required to fully validate that the data being sent back and forth is correct.
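Hand-rolled, that kind of content check looks roughly like the sketch below (endpoint and fields are hypothetical); dedicated tools just let you express the same assertions declaratively and run them on a schedule:

    import requests

    response = requests.get("https://api.example.com/v1/users/42", timeout=10)  # hypothetical
    assert response.status_code == 200, f"unexpected status {response.status_code}"

    body = response.json()  # fails loudly if the payload is not valid JSON

    # Correctness checks: validate the shape and content of the data, not just that it responded.
    assert isinstance(body.get("id"), int), "id missing or not an integer"
    assert "@" in body.get("email", ""), "email field looks malformed"
    assert body.get("status") in {"active", "suspended"}, f"unexpected status {body.get('status')!r}"

    print("payload validated")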
It's also important to validate how clients authenticate with your API. Good API-specific monitoring tools will understand OAuth, mutual authentication with client certificates, token authentication, etc.
Hopefully this gives you a sense of why API monitoring is different from "traditional" metrics, APM and logging tools, and how they can all play together to give a complete picture of how your application is performing.
I am using runscope.com for my company. If you want something free, apicombo.com can also do it.
Basically you can create a test for your API endpoint to validate the payload, response time, status code, etc. Then you can schedule the test to run. They also provide some basic statistics.
I've tried several applications and methods to do that, and the best (for my company and our related projects) is to log key=value pairs (atomic entries with all the information associated with the operation, like source IP, operation result, elapsed time, etc., in specific log files for each node/server) and then monitor them with Splunk. With your REST and JSON data your approach may be different, but it's also well supported.
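A rough sketch of that key=value logging style in Python (the field names and log file path are just examples), which Splunk then splits into searchable fields:

    import logging
    import time

    # One atomic line per operation; Splunk extracts key=value pairs into fields automatically.
    logging.basicConfig(filename="api-node1.log",
                        format="%(asctime)s %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("api.audit")

    def log_operation(operation, source_ip, result, elapsed_ms):
        log.info("op=%s src_ip=%s result=%s elapsed_ms=%.1f",
                 operation, source_ip, result, elapsed_ms)

    start = time.perf_counter()
    # ... perform the operation here ...
    log_operation("create_order", "10.0.0.17", "ok", (time.perf_counter() - start) * 1000)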
It's pretty easy to install and set up. You can monitor (almost) real-time data (response times, operation results), send notifications on events and do some DWH work (and many other things; there are lots of plugins).
It's not open source, but you can try it for free if you use less than 50 MB of logs per day (that's how it worked some time ago; I'm on an enterprise license now, so I'm not 100% sure).
Here is a little tutorial explaining how to achieve what you are looking for: http://blogs.splunk.com/2013/06/18/getting-data-from-your-rest-apis-into-splunk/

SQL installation on Amazon Web Services

Folks, I have a question this morning that hopefully one of you techies can answer. During the past few months, I have been heavily involved in working through several SQL certification study guides, as it's my desire to secure the Microsoft Certified Solutions Associate (MCSA) or an associate-level certification. While I have previous experience within this skill set and wanted to sharpen it by gaining further experience and hopefully securing this certification, it has been quite challenging to set up a home lab that lets me create an environment similar to what the big dogs use nowadays (Windows Server, several SQL instances, virtualization and all that) due to a lack of proper hardware, or the cost. In any case, my question today is to seek your advice and guidance on other possible options, particularly whether this task can be accomplished using Amazon's AWS. I understand they offer some level of capacity that can be used as a playground, and if one wants to extend the capacity, a subscription is an option. So, if I were to subscribe to the paid version of it, would it be possible to install all the software needed to practice and experiment with all the technologies required to complete and/or master the contents of the training kit? Again, I'm already using my small home network and have all the proper software, but I just feel that it's not enough, as some areas require higher computing power to properly test or run specific things.
Short: Yes
You can create a micro instance for free and install whatever you want on it. If you're not familiar with using the CLI it can be a bit daunting, but there are plenty of guides online.
They also offer an RDS service, where they will set up a database instance and maintain it for you, but it's not free.
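If you'd rather script the micro-instance creation than click through the console, here is a minimal sketch using boto3 (the AMI ID, key pair and region are placeholders you would substitute with your own):

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")  # placeholder region

    # Launch one free-tier-eligible micro instance (placeholder AMI and key pair).
    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",     # pick a Windows Server or Linux AMI
        InstanceType="t2.micro",
        KeyName="my-key-pair",
        MinCount=1,
        MaxCount=1,
    )
    print("launched", instances[0].id)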
Edit
Link to their MS Server page:
http://aws.amazon.com/windows/
Azure is the Windows cloud service; I think the comment was asking whether you have considered looking at Azure instead of AWS.

Easiest API to learn / methodology to create web applications for running MapReduce on Hadoop?

I have Hadoop 1.0.4 running on my Ubuntu 11.04, configured with Eclipse. I want to make a web application to run Hadoop jobs, or maybe Cassandra, HBase and Hive might be a way, but I don't have much time to learn all of these thoroughly and I want to do it as quickly as possible. Any advice on which one might prove the easiest to get started with?
I don't know if this question really qualifies to be here on SO in its current form. That is the reason I did not write this initially. But a lot of SO experts are out there to decide this (they can do it much better than me) :)
Having said that, I would like to share a few things with you based on my personal experience, so that you proceed down the correct path. First of all, Hadoop jobs (MapReduce) and Hive are actually not a good fit for web-service kinds of use cases. They are most suitable for offline, batch-processing kinds of workloads. HBase/Cassandra can be used, though, if you have real-time needs (like web services).
Coming back to your actual question. Before diving into Hadoop, Hive, HBase etc., I would suggest you get some grounding in web services first (if you are new to web services as well). The reason is that a web service has a much wider scope of applicability than tools like Hadoop, Hive and HBase. Those tools are specific to particular use cases and cannot be used everywhere, but web services are used almost everywhere, with any number of different things: RDBMSs, NoSQL datastores, etc. So if you know web service concepts you definitely have that extra edge. To begin with, you can visit these links:
The Web Services Tutorial by W3Schools (nice and easy; it serves the quick-start-guide purpose well).
For a detailed tutorial you can visit the Oracle web services tutorial.
This link by IBM developerworks has references to some really good web services learning stuff.
You might find this one really helpful to start with(Shows how to create web services using Eclipse).
And you can obviously Google web service tutorials anytime.
One last thing. Although it's not mandatory to be a pro in things like Hadoop, Hive and HBase, having a decent understanding of the concepts will be really helpful in developing your solution in a much better manner. It'll allow you to think accurately, in the correct direction.
HTH.

Your Daily Schedule

I try to devote a certain amount of time every day to learning new skills while also improving the older skills I've gained. But I'm not hitting the sweet spot where I learn what I want and still get things done.
So, I was just curious to know how you guys spend time everyday and "hit the spot".
I use Google Reader and just add more and more tech blogs as I find them. Then I read them in the morning with coffee. This site provides quite a bit of education as well.
I read technical websites every day. I like to use Google Reader as well, since it tracks what I've read and allows me to easily continue where I left off from any computer I happen to have access to.
I started out with the basics: Slashdot, Ars Technica, and Dr. Dobb's Journal. These sites will frequently lead to other great sources of information.
When following sites like this via an RSS feed, you don't have to read every article that comes through. Just scan them over and read the ones that catch your interest. Without realizing it, you will store away a lot of information that will pop back into your head when you encounter a situation that triggers it.
You won't necessarily be a master of everything you read but you will be at least aware of current developments and technologies.
The second part is to practice. I usually have some simple and enjoyable programming project on the go at home all the time. I may not actually complete anything useful, but I use it as a basis to try out new things. A lot of times I will encounter a problem at work and find that I've already explored some of the solutions at home, or at least thought about them, and I'm able to make a much more informed decision.
Tech blogs are a great way to find out basic information on new material, and sometimes they will even have a more in-depth feature that can leave you with more take-away knowledge than just another tech headline.
What works best for me is to identify topics I have read about in tech blogs that interest me, and then find sources that provide more advanced information on the subject. Then, as the week goes by, I spend however much time is needed to digest the information and learn the material.
Just browsing tech headlines all day, in my opinion, won't leave you with any distilled information beyond basic advances in technology. Actually diving in and spending X amount of time each day learning material that is interesting to you will be infinitely better.
I find the key to getting things done is to manage distractions. I process email in batches only once or twice per day, and try to avoid phone calls, meetings and IM where I can. This leaves me bigger chunks of time available for development.
I learn new skills in a couple of different ways... Firstly, reading a load of development-related tech blogs via an RSS reader. This allows me to absorb a continuous flow of new ideas without necessarily "learning" a lot, but saves me a lot of time on research later. Then, when I run into something new, chances are something I've already read will be of assistance and I can dig in and "learn" it in more detail.
E.g. through regular RSS reading I became aware that a significant number of JavaScript frameworks had been developed, and it seemed to make more sense to use a JS framework than to code DOM manipulation by hand. Later, when I started a new project that was going to be heavy on Ajax, I was able to name a couple of frameworks off the top of my head, quickly pick one, and really dig in and get to know it.

Getting developers to use a wiki [closed]

I work on a complex application where different teams work on their own modules with a degree of overlap. A while back we got a MediaWiki instance set up, partly at my prompting. I have a hard job getting people to actually use it, let alone contribute.
I can see a lot of benefit in sharing information. It may at least reduce the times we reinvent the wheel.
The wiki is not very structured, but I'm not sure that is a problem as long as you can search for what you need.
Any hints?
Some tips:
Any time someone sends information by email that really should be in a wiki, make a page for that topic and add what they put in the email. Then reply "Thanks for that info, I've put it into the wiki here so that it's easier to find in the future."
Likewise, if you have information you need to share that should be in the wiki, put it there and just send an email with a link to it, rather than email people.
When you ask people for information, phrase it so that putting such documentation in the wiki should be considered the default or standard: "I searched in the wiki but I couldn't find it. Have you put that info up there yet?"
If you are the "wiki champion", make sure other people know how to use it, e.g. "Did I go through how to create a new page with you yet?"
Edit the sidebar to make sure it is relevant to your work.
Use "nav box" style templates on related pages for easier navigation.
Put something like {{Special:NewPages/5}} on the front page, or recent changes, so that people can see the activity.
Take a peek at Recent changes every few days or week, and if you notice someone adding information without being prodded, send them an email or drop by and give them a little compliment.
As I mentioned before, a Wiki is very unorganized.
However, if that is the only argument from your developers, then invest some effort to create a simple index page and keep it updated (either do it yourself or ask people to link their contributions to the index). That way, the Wiki might grow into a very nice and quite comprehensive collection of documentation for all your work.
We've been using a wiki in some form or another for a while now, but it does take a while for people to get on board. You might find that you will be the only one writing articles for some time, but bear with it, other people will come on board eventually.
If someone sends an email around that contains information related to the project then helpfully point them in the direction of the wiki - and keep doing that - they should get the hint.
We have a SharePoint portal and use the wiki from there - we customised it with our own branding so that it "looks the part" - I really feel this has helped to improve the uptake of it.
Make sure that everyone is aware that the wiki is even more informal than email, because there will be a "fear factor": people may think anything they add to the wiki will be over-analysed.
I think most of the answers so far are spot on - the more you plug away at it yourself, the larger the body of useful information will become, so slowly but surely people will naturally start to use it.
The other approach you could use is this: Suggest that every time someone asks another team member a question about the project, they should answer the question as normal, but also add the answer to a section of the Wiki. This may take a few minutes extra, but it will mean that the next time someone asks the same question (which they inevitably will), you can save time by pointing them at the Wiki. This, in turn, should help people to start using the Wiki as a first source of information and help overall up-take.
You can't force developers to use something they have no incentive to use; unfortunately wikis, like documentation (well, in fact wikis are documentation), rarely have any "cool" value for developers. Besides, they're already deep into dev work; could you really bother them with a wiki?
That being said, the people who pushed for the wiki (e.g., you) should be primarily responsible for updating it, and you really would have a lot of work cut out for you if you're serious about it.
You might also try the following:
It's not very structured, you say; a lot of people get turned off by ill-structured (hard-to-search/browse) wikis, so maybe you can fix that first.
Maybe you can ask lead developers/project managers to populate it with things that are issues for them: things like code conventions and API design for your particular project
Lead by example: religiously document your part of the system. Setting a precedent may encourage others to do the same
Sell the idea of using the wiki to the developers. You've identified some benefits, share those with the developers. If they can see that they'll get something of value out of it they'll start using it.
Example advantages from What Is a Wiki
Good for writing down quick ideas or longer ones, giving you more time for formal writing and editing.
Instantly collaborative without emailing documents, keeping the group in sync.
Accessible from anywhere with a web connection (if you don't mind writing in web-browser text forms).
Your archive, because every page revision is kept.
Exciting, immediate, and empowering--everyone has a say.
I have done some selling and even run some training sessions. I think some people are turned off by the lack of WYSIWYG editing and ability to paste formatted text from Word or Outlook. I know there are some tools to work around these, but they are still barriers.
There are some places where the wiki is being used to log certain activities, but the people who update those pages are not doing anything else with it.
I will use the wiki to document my specialised area regardless as it acts as a convenient brain extension. When starting a new development I use it as a notepad for ideas that I can expand on as it progresses.
It would help if management would give it some vocal support, even if it is not made mandatory.
I have a hard job getting people to actually use it, let alone contribute.
One of the easiest ways to get people to contribute to a wiki, is to actually have them provide contents in a wiki-suitable fashion, i.e. so that whatever they post using their usual channels of communications (newsgroups, mailing lists, forums, issue trackers, chat), is basically suitable for inclusion on the wiki.
So that others (users/volunteers) can simply take such contents and put them on the wiki.
This sounds more complicated than it really is, it's mostly about generalizing questions and answers, so that they are not necessarily part of a conversation, but can be comprehensible, meaningful and useful in a standalone fashion.
For example a question like the following:
how do I get git to clone a remote repository???
Can be answered like this:
Hello,
Just use git clone git://...
But questions can also be answered in a less personal style:
In order to clone a git repository, you will want to use the clone parameter to git:
git clone git://....
What I am trying to say is that most discussions in a project can, and should, easily be turned into documentation eventually. With this sort of mindset, your documentation can actually grow rather rapidly. You only need to get people to keep in mind that useful information should ideally be provided in a fashion that is suitable for wiki inclusion.
I have witnessed several instances where open source projects started to use this approach to some extent and while some people (largely new users) complained that answers were not very personal, the body of documentation was increasing steadily, because other people simply monitored such discussions and started to copy/paste such responses to the wiki.
Basically, this is one of the easiest ways to get people to contribute to a wiki, without requiring them to actually use it themselves, the only thing that's required of them is a shift in thinking.
If the developers still need to maintain 'real' documentation (such as Word documents), I see no way to meaningfully duplicate that on a wiki.
It does not make sense for people to write things twice.
Any duplicated data is prone to getting out of sync, soon.
What my current customer has done is move all this to Wiki. So I only document once, and I do it on the Wiki.
This is okay. Working with Wiki is more tedious than with Word, but at least the doc is online and others can mix-and-match with it.
Another working solution (IMHO) would be to store docs alongside the source, in Subversion. But then the merging system needs to be able to cope with rich text etc. as well. I don't know if any solution for that exists (other than using HTML or LaTeX, which actually would not be bad picks).
Find "sticky" items (sub-3 pg. docs / diagrams / etc) something that the team seems to be creating again and again & post it on the wiki. Make sure everyone has access to the wiki and knows its there - set up a notification mechanism if possible. With some luck, the next time they have to access, rather than dig it out of version control or their machines - they should hit the wiki.
If they still don't, try to see if the team has enough slack to actually use the wiki - Subtler issues may lie beneath their reluctance.
Take a look at the advice at http://www.ikiw.org/ Grow your Wiki
Just to add to some of the excellent advice being offered here...
As a dev in a small company that does largely gov't contract work in the 6-24 month range, I find that my time is often split between development and writing status reports (right up there with writing documentation, only worse!) Having a wiki to slap down unorganized thoughts and notes as we go along has made report-writing a lot less painful (not pain-LESS, but better all the same).
Further, if you're already in the MediaWiki world, you might want to look at Semantic MediaWiki. It allows you to take the organization of your data to another level by tagging it semantically. That doesn't mean a lot on its own, I know, but I can tell you (for example) that it can drastically improve the relevance of the data returned from searches. It is definitely worth a look.
Generally good advice here. I'd like to add:
You really need a champion - someone pushing this to developers and management (without being pushy - that's a challenge!) and providing support & tutorials when possible. This person also needs to be a peer (so a fellow developer, not someone in a remote IT department) and really customer focused i.e. ready to make changes when requested.
Speaking of changes, some people here say wikis are unstructured. I disagree. Our MediaWiki installation is structured using categories, particularly with two extensions: WarnNoCategories (to require users to add a category when saving a page) and CategoryTree (to show how all the categories fit together; this can be linked to from the sidebar). I've got more tips on how we keep the threshold low, if you're interested.