Any ideas for a model (class diagram) to track visitors for a website?

My background idea is to create a software layer that merges everything related to anonymous visitors:
- uniqueness
- sessions
- geo-localization
- hit counter
- basic reporting
- distributable APIs
- thermal map for forms
Many people speak well of Piwik. I'm at the beginning of the discovery process, so I need to know whether I have to reinvent the wheel or can stand on the shoulders of giants.
PS: Google Analytics is an option as of now.

Piwik is definitely the way to go. The only thing it doesn't do is the "thermal map for forms", but the rest is covered, and more: a full-featured API (for analytics reports and for tracking data), a nice UI, etc.


Bug tracker - aggregation and automated workflow

Intro:
I'm working for a contractor company. We build software for different corporate clients, each with their own rules, software standards, etc.
Problem:
The result is that we are using several bug-tracking systems. The ticket flow is relatively large and the SLAs are sometimes brutal. The main problem is that we keep track of these tickets in our own BT (currently Mantis) but also communicate with clients in theirs. As it is, too many channels of communication create too much information noise.
Solution, progress:
The current solution is an employee responsible for synchronizing the streams, keeping track of the SLAs, and many other things. It consumes quite a large part of his time (circa 70%) that could be spent on something more valuable. The other issue is that he is not fast enough, and sometimes the sync is not really in sync: some comments end up on only one system, and some are lost completely. (And don't get me started on holidays or sick days; that's where the fun begins.)
Question:
How can this process be automated, partially or entirely: aggregating tasks, watching SLAs, notifying the right people, etc.?
Thank you for your answers.
You need something like Zapier. It can map different applications and synchronize data between them. It works simply:
You create a zap (for example, between Redmine and Teamwork).
You configure the mapping (how items/attributes in Redmine map to items/attributes in Teamwork).
You generate access tokens in both systems and add them to the zap.
Zapier then synchronizes Redmine and Teamwork on a regular schedule.
But Mantis is not yet supported by Zapier. If all or most of your clients' BTs are on Zapier's app list, you could move your own BT to another platform or request Mantis support from Zapier.
Another way is to develop your own synchronization service that connects to each client's BT (authenticating like a regular employee with a login/password or token) and downloads updates into your own BT. This is the hard way, and the solution requires continuous development to keep up with current versions of the clients' BTs. A rough sketch of the polling approach is below.
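To make the "hard way" concrete, here is a minimal sketch of such a one-way polling sync in Python. The endpoint paths, field names, and token auth are hypothetical; a real integration would use each tracker's actual API (e.g. Mantis's own SOAP/REST interface):

```python
# Minimal one-way sync loop: poll a client's tracker, mirror updates into
# our own. All URLs, fields, and auth below are hypothetical placeholders.
import time
import requests

CLIENT_BT = "https://client-bt.example.com/api"  # hypothetical base URL
OWN_BT = "https://own-bt.example.com/api"        # hypothetical base URL
CLIENT_TOKEN = "client-api-token"                # hypothetical credentials
OWN_TOKEN = "own-api-token"

def fetch_updates(since):
    """Ask the client's tracker for issues changed since the last sync."""
    resp = requests.get(
        f"{CLIENT_BT}/issues",
        params={"updated_since": since},
        headers={"Authorization": f"Bearer {CLIENT_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]

def upsert_issue(issue):
    """Create or update the mirrored ticket in our own tracker."""
    requests.put(
        f"{OWN_BT}/issues/{issue['id']}",
        json={"summary": issue["summary"],
              "status": issue["status"],
              "comments": issue["comments"]},
        headers={"Authorization": f"Bearer {OWN_TOKEN}"},
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    last_sync = "1970-01-01T00:00:00Z"
    while True:
        # Take the timestamp before fetching, so nothing updated during
        # the fetch is missed on the next pass.
        now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
        for issue in fetch_updates(last_sync):
            upsert_issue(issue)
        last_sync = now
        time.sleep(300)  # poll every five minutes
```

Even this toy version shows where the ongoing cost is: every client tracker needs its own `fetch_updates`/`upsert_issue` adapter, and each one breaks whenever that client upgrades.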
You can have a look at Slack: https://slack.com/
It's a great tool for group conversations.
Talk, share, and make decisions in open channels across your team, in
private groups for sensitive matters, or use direct messages
one-to-one.
It has a lot of integration tools, and you can use Zapier (https://zapier.com/) with it to program triggers.
With different channels you can notify the right people, individually or all together, in a group conversation :)
The obvious answer is to create integrations between all of the various BTs. Without knowing what those are, it's hard to say whether that's entirely possible. Most modern BTs have an API and support integrations. Some, especially more desktop-based ones, don't; for those you'd probably have to monitor the database directly.
Zapier, as someone already suggested, is a great tool for creating integrations and may already have some of the ones you need available. I love Slack and it has an API, but messages are basically just text, and unless you want to do some kind of delimiting when you post messages to its API, it probably isn't going to work.
I'm not sure what your budget is, but it will cost resources to create the integrations. I'd suggest instead that you hire someone to simply manage this: someone whose sole responsibility is to cross-populate the internal and external bug-tracking systems and track the progress in each. All you really need is someone with good attention to detail; they don't have to be a developer. This should be more cost-effective than spending developer resources on it.
The other alternative is simply to stop. If your requirements dictate that you use your clients' bug-tracking software for projects you do for them, just use their software and stop duplicating the effort. If you need some kind of central repository for managing work, maybe a simple table or spreadsheet somewhere with the client, the project, the issue number, the status, and, if possible, a link to the issue in the client's BT would do. I understand the need and desire to centralize this, but if it's stifling productivity, then the opportunity costs are too high IMO.
If you create an integration tool for this, you will indeed have a very viable product. This is actually a pretty common problem.

Why should I start using Google Material Design Lite instead of Twitter Bootstrap or Foundation

Disclaimer: I don't want to start any fight with Google fanboys. I'm just asking because I didn't find a direct answer to my question, and maybe someone who has already started working with it (or any Google dev) can give advice.
Google recently announced Material Design Lite 1.0 and the project has been starred over 9k times on GitHub in a few days. I read some posts [1, 2] comparing MDL with Twitter Bootstrap, and I don't understand why anyone outside Google's headquarters should consider starting to work with it.
As they said:
“We challenged ourselves to create a visual language for our users that synthesizes the classic principles of good design with the innovation and possibility of technology and science.” – Google Material Design Introduction
So, why are they releasing MDL? I just don't see the point of spending time learning or switching to a new frontend framework that offers much less than the existing ones. MDL looks different and modern and has nicer transitions. Is that all? Is there any technical advantage?
The answer is, it depends on you.
Do you want Material Design at all? If so, MDL is well worth looking at. If you clearly don't want MD, then of course MDL isn't your cup of tea.
MDL is not a full framework. We aren't trying to cover every little thing you do in development. We provide components that follow the Material Design specification as closely as we can. You need to play around with it and see if it has what you need and whether it fits your design goals.
The technical advantages... depend on what you want. The component handler is a really cool way to modularize your JS, but it is far from the only option; there are plenty of other approaches out there. So once again, play around and see if you like it.
Material Design Lite is very enticing for anyone building a site or application targeted at people who are in the Google-verse as it were. Speaking to Android developers/users? Chrome people? Then MDL is an enticing choice of libraries to apply to your site to make them feel at home. However, anyone else who simply wants to add Material Design to their stuff simply because they like it also would see a big benefit from MDL.
At the end of it all, MDL is a tool to help people who want MD implement it in their sites and applications. If you need to ask why you should care, then it seems MDL just isn't for you. That's fine. There are plenty of design choices out there to choose from. Just be consistent within each project you build.

What's the best way to monitor your REST API? [closed]

I've created an API based on the RESTful pattern and I was wondering what the best way to monitor it is. Can I somehow gather statistics on each request, and how deeply can I monitor the requests?
Also, can it be done using open source software (maybe building my own monitoring service), or do I need to buy third-party software?
If it can be achieved with open source software, where do I start?
Start with identifying the core needs that you think monitoring will solve. Try to answer the two questions "What do I want to know?" and "How do I want to act on that information?".
Examples of "What do I want to know?"
Performance over time
Largest API users
Most commonly used API features
Error occurrence in the API
Examples of "How do I want to act on that information?"
Review a dashboard of known measurements
Be alerted when something changes beyond expected bounds
Trace execution that led to that state
Review measurements for the entire lifetime of the system
If you can answer those questions, you can either find the right third-party solution that captures the metrics you're interested in, or inject monitoring probes into the right sections of your API that will tell you what you need to know. I noticed that you're primarily a Laravel user, so it's likely that many of the metrics you want can be captured by adding before (Registering Before Filters on a Controller) and after (Registering an After Application Filter) filters to your application, to measure time to response and successful completion of the response. This is where the answers to the first set of questions are most important ("What do I want to know?"), as they will guide where and what you measure in your app.
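The filters above are Laravel-specific, but the probe idea carries over to any stack. As a minimal, framework-agnostic sketch, here is the same before/after timing pattern written as a Python WSGI middleware; the reporting callback is a placeholder for whatever metrics backend you choose:

```python
# A minimal sketch of before/after request timing as a WSGI middleware.
# The report callback is a placeholder; point it at your metrics system.
import time

class TimingMiddleware:
    def __init__(self, app, report):
        self.app = app          # any WSGI application
        self.report = report    # callback: (path, status, seconds) -> None

    def __call__(self, environ, start_response):
        start = time.monotonic()
        captured = {}

        def timing_start_response(status, headers, exc_info=None):
            captured["status"] = status
            return start_response(status, headers, exc_info)

        try:
            return self.app(environ, timing_start_response)
        finally:
            # Note: for streaming responses this fires before the body is
            # fully iterated; good enough for a sketch.
            self.report(environ.get("PATH_INFO", ""),
                        captured.get("status", "unknown"),
                        time.monotonic() - start)

# Usage:
# app = TimingMiddleware(app, lambda path, status, s: print(f"{path} {status} {s:.3f}s"))
```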
Once you know where you can capture the data, selecting the right tool becomes a matter of choosing between (roughly) two classes of monitoring applications: highly specialized monitoring apps that are tightly bound to the operation of your application, and generalized monitoring software that is more akin to a time series database.
There are no popular (to my knowledge) open source examples of the highly specialized case. Many commercial solutions do exist, however: New Relic, Ruxit, DynaTrace, etc. Their function could easily be described as similar to a remote profiler, with many other functions besides. (Also, don't forget that a more traditional profiler may be useful for collecting some of the information you need; while it definitely won't supplant monitoring your application, there's a lot of valuable information to be gleaned from profiling even before you go to production.)
On the general side of things, there are many more open source options that I'm personally aware of. The longest-lived is Graphite (a great intro to which may be read here: Measure Anything, Measure Everything), which is in pretty common use. Graphite is far from the only option, however; you can find many others, like Kibana and InfluxDB, should you wish to self-host.
Many of these open source options also have hosted versions available from several providers. Additionally, you'll find many entirely commercial options in this camp (I'm the founder of one, in fact :) - Instrumental).
Most of these commercial options exist because application owners have found it pretty onerous to run their own monitoring infrastructure on top of running their actual application; maintaining availability of yet another distributed system is not high on many ops personnel's wishlists. :)
(I'm clearly biased in answering this since I co-founded Runscope, which I believe is the leader in API monitoring, so take all of this with a grain of salt, or trust my years of experience working with thousands of customers on this specific problem :)
I don't know of any OSS tools specific to REST(ful) API monitoring. General purpose OSS metrics monitoring tools (like Graphite) can definitely help keep tabs on pieces of your API stack, but don't have any API-specific features.
Commercial metrics monitoring tools (like Datadog) or Application Performance Monitoring (APM) tools (like New Relic or AppDynamics) have a few more features specific to API use cases, but none are centered on it. These are a useful part of what we call a "layered monitoring approach": start with high-level API monitoring, and use these other tools (exception trackers, APM, raw logs) to dive into issues when they arise.
So, what API-specific features should you be looking for in an API monitoring tool? We categorize them based on the three factors that you're generally monitoring for: uptime/availability, performance/speed and correctness/data validation.
Uptime Monitoring
At a base level you'll want to know if your APIs are even available to the clients that need to reach them. For "public" APIs (meaning available on the public internet, not necessarily publicized; a mobile backend API is public but not necessarily publicized), you'll want to simulate the clients that call them as closely as possible. If you have a mobile app, the API likely needs to be available around the world. So at a bare minimum, your API monitoring tool should let you run tests from multiple locations. If your API can't be reached from a location, you'll want notifications via email, Slack, etc.
If your API is on a private network (corporate firewall, staging environment, local machine, etc.) you'll want to be able to "see" it as well. There are a variety of approaches for this (agents, VPNs, etc.) just make sure you use one your IT department signs off on.
Global distribution of testing agents is an expensive setup if you're self-hosting, building in-house or using an OSS tool. You need to make sure each remote location you set up (preferably outside your main cluster) is highly-available and fully-monitored as well. This can get expensive and time-consuming very quickly.
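For a sense of scale: a single-location availability check is only a few lines of code; the expensive part described above is running checks like it reliably from many regions. A toy sketch in Python (the URL and the alerting hook are placeholders):

```python
# A toy single-location availability probe. The URL and the alert hook are
# placeholders; real tools run checks like this from many regions at once.
import requests

def is_up(url, timeout=10.0):
    """Treat network errors and 5xx responses as 'down'."""
    try:
        return requests.get(url, timeout=timeout).status_code < 500
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if not is_up("https://api.example.com/health"):
        print("API unreachable from this location")  # wire up email/Slack here
```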
Performance Monitoring
Once you've verified your APIs are accessible, you'll want to start measuring how fast they perform to make sure they're not slowing down the apps that consume them. Raw response time is the bare minimum metric you should track, but it is not always the most useful. Consider cases where multiple API calls are aggregated into a single view for the user, or where user actions generate dynamic or rarely requested data that may not yet be present in a caching layer. These multi-step tasks or workflows can be difficult to monitor with APM or metrics-based tools, as those don't have the capability to understand the content of the API calls, only their existence.
Monitoring for speed externally is also important for the most accurate representation of performance. If the monitoring agent sits inside your code or on the same server, it's unlikely to account for all the factors an actual client experiences when making a call: things like DNS resolution, SSL negotiation, load balancing, caching, etc. A sketch of such an external timing breakdown follows.
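Here's a rough sketch of an external, per-phase timing breakdown using Python's pycurl bindings, since libcurl exposes the phase timings a real client experiences; the URL is a placeholder:

```python
# A sketch of an external, per-phase timing breakdown via pycurl (libcurl).
# The URL is a placeholder. Each getinfo() value is seconds since the
# transfer started, so the phases are cumulative.
from io import BytesIO
import pycurl

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://api.example.com/v1/users")
c.setopt(pycurl.WRITEDATA, buf)
c.perform()

print("DNS lookup:    %.3fs" % c.getinfo(pycurl.NAMELOOKUP_TIME))
print("TCP connect:   %.3fs" % c.getinfo(pycurl.CONNECT_TIME))
print("TLS handshake: %.3fs" % c.getinfo(pycurl.APPCONNECT_TIME))
print("First byte:    %.3fs" % c.getinfo(pycurl.STARTTRANSFER_TIME))
print("Total:         %.3fs" % c.getinfo(pycurl.TOTAL_TIME))
c.close()
```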
Correctness and Data Validation
What good is an API that's up and fast if it's returning the wrong data? This scenario is very common and is ultimately a far worse user experience. People understand "down"; they don't understand why an app is showing them the wrong data. A good API monitoring tool will let you deeply inspect the message payloads going back and forth. JSON and XML parsing, complex assertions, schema validation, data extraction, dynamic variables, multi-step monitors, and more are required to fully validate that the data being sent back and forth is correct.
It's also important to validate how clients authenticate with your API. Good API-specific monitoring tools will understand OAuth, mutual authentication with client certificates, token authentication, etc.
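As a concrete illustration of the correctness checks described above, here's a minimal sketch in Python using requests and the jsonschema package; the endpoint, schema, and expected values are invented for illustration:

```python
# A sketch of correctness checks: status code, schema validation, and a
# field assertion. Endpoint and schema are invented for illustration.
import requests
from jsonschema import ValidationError, validate

USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}

resp = requests.get("https://api.example.com/v1/users/42", timeout=10)
assert resp.status_code == 200, f"unexpected status {resp.status_code}"

body = resp.json()
try:
    validate(instance=body, schema=USER_SCHEMA)
except ValidationError as err:
    raise SystemExit(f"schema violation: {err.message}")

assert body["id"] == 42, "response is for the wrong user"
```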
Hopefully this gives you a sense of why API monitoring is different from "traditional" metrics, APM, and logging tools, and how they can all play together to give a complete picture of how your application is performing.
I am using runscope.com at my company. If you want something free, apicombo.com can also do the job.
Basically you can create a test for your API endpoint to validate the payload, response time, status code, etc. Then you can schedule the test to run. They also provide some basic statistics.
I've tried several applications and methods for this, and the best (for my company and our related projects) is to log key=value pairs (atomic entries with all the information associated with each operation, such as source IP, operation result, elapsed time, etc., in dedicated log files for each node/server) and then monitor them with Splunk. With your REST and JSON data your approach may differ, but that is also well supported.
It's pretty easy to install and set up. You can monitor (almost) real-time data (response times, operation results), send notifications on events, and do some data warehousing (and many other things; there are lots of plugins). A minimal sketch of the log format is below.
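For illustration, here's a minimal sketch of that key=value logging in Python; the field names are examples only, and Splunk extracts key=value pairs from lines like these automatically:

```python
# A minimal sketch: one key=value line per operation, which Splunk parses
# into searchable fields on its own. Field names are examples only.
import logging
import time

logging.basicConfig(filename="api.log", format="%(message)s", level=logging.INFO)

def log_request(src_ip, endpoint, status, started):
    logging.info(
        "ts=%s src_ip=%s endpoint=%s status=%d elapsed_ms=%.1f",
        time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        src_ip, endpoint, status,
        (time.monotonic() - started) * 1000,
    )

# Usage inside a request handler:
start = time.monotonic()
# ... handle the request ...
log_request("203.0.113.7", "/v1/orders", 200, start)
```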
It's not open source, but you can try it for free if you use less than 50 MB of logs per day (that's how it worked some time ago; now that I'm on an enterprise license I'm not 100% sure).
Here is a little tutorial explaining how to achieve what you are looking for: http://blogs.splunk.com/2013/06/18/getting-data-from-your-rest-apis-into-splunk/

How to prevent licensed video/document sharing?

If a website is to provide non-free content in the form of videos and documents, how should one serve this kind of content? I don't want to write an occasionally connected desktop application (similar to iTunes) that prevents bought content from being easily shared, so I have a few questions about this:
Documents: What's the best document format for this scenario, one that would prevent someone from freely sharing it after it was bought?
If I think of PDF, it's a great format because people can't tamper with its content and they will see exactly what you created, but the problem is that after someone obtains it, it can easily be shared with others, whether it has a password or not.
Videos: For video it's probably wise to use some public (possibly paid) service that can handle video streaming, like YouTube (which AFAIK cannot host non-public videos).
I'm well aware that there is no 100% perfect solution that prevents it all, but having content that is 90% successfully locked down is still better than hoping people won't share something that can easily be shared.
Can you list a few websites that do something similar? I could learn a lot from them. And please provide some guidelines I should follow in this regard.
Is there a document format similar to PDF that has built-in security capabilities? Even a commercial one... It should support some kind of authorization functionality that would work similarly to software activation.
Note: I wasn't sure whether this should be posted on webapps.stackexchange.com or here, but I decided it belongs here because it's related to development. I'm generally interested in programmatic approaches I could use.

Google Chart APIs for plotting data in local machine

Preface:
I am a systems programmer (who has just started his career as a software engineer), so I'm not very good at web scripting languages, though I have just started learning them.
Problem Synopsis:
I want to write an app that keeps track of what I am doing and records it, allowing me to analyse my time-spending patterns whenever I want.
Problem Description:
My plan is to write an app that sits in the background, keeps track of the active window on my desktop (every second), and stores this data in an SQLite database. But to make it more appealing (as I want to share this app with others), I want a feature where the user can analyse data recorded over any period of interest. For this, I want the user to be able to generate charts and graphs from the recorded data.
For this I thought of using the browser for the UI and the Google Visualization APIs for the plotting. So, is it possible to use the Google Visualization APIs to plot local data? If so, please guide me on how to continue... (As mentioned, I am a systems programmer, a C programmer to be specific, who has just started learning web scripting in his free time.)
Reasons for these decisions:
(1) The app that records what I am doing will be in C/C++, because I am a systems programmer and am very comfortable with those languages, so I can get it done easily and quickly.
(2) SQLite: very small, can easily be embedded in my app, and is open source. Many web scripting languages like PHP and Python have interfaces for accessing SQLite databases.
(3) Browser for the UI: I hope the browser will be easy for the user, and I won't have much to do regarding the UI, since the browser is the main UI and the Google Visualization APIs will do the plotting. All I might have to do is write a few lines of script. (Am I right on that last point?)
Comments on my design decisions and any tutorials (or pointers) that teach me how to do this will be highly appreciated.
Thank You
MicroKernel :)
PS: Idea inspired by Nathan Baulch's reply to https://stackoverflow.com/questions/161590/how-do-you-track-your-time
@Nathan Baulch, thank you so much for such a brilliant idea. \m/
I would embed a browser in the app (which you said you want to write in C/C++) and use a jQuery plotting library for the charts. You will find more info here: http://www.flotcharts.org/
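On the question's other half: yes, the Google Visualization API can plot purely local data, since the chart renders client-side; the page only needs network access to fetch Google's loader script (for fully offline use, a bundled library like Flot is safer). A rough sketch in Python, assuming a hypothetical activity table with window_title and seconds columns:

```python
# A sketch that reads the activity database and writes a self-contained HTML
# report using the Google Charts loader. Table/column names are invented;
# the page needs network access to fetch the loader from gstatic.com.
import json
import sqlite3

conn = sqlite3.connect("activity.db")
rows = conn.execute(
    "SELECT window_title, SUM(seconds) FROM activity GROUP BY window_title"
).fetchall()
conn.close()

data = [["Window", "Seconds"]] + [list(r) for r in rows]

html = f"""<!DOCTYPE html>
<html><head>
<script src="https://www.gstatic.com/charts/loader.js"></script>
<script>
google.charts.load('current', {{packages: ['corechart']}});
google.charts.setOnLoadCallback(function() {{
  var table = google.visualization.arrayToDataTable({json.dumps(data)});
  var chart = new google.visualization.PieChart(document.getElementById('chart'));
  chart.draw(table, {{title: 'Time per window'}});
}});
</script>
</head><body><div id="chart" style="width:800px;height:500px"></div></body></html>"""

with open("report.html", "w") as f:
    f.write(html)
print("Open report.html in a browser.")
```

The C/C++ recorder and the reporting script stay completely decoupled this way: one writes rows into SQLite, the other turns them into a chart on demand.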