We are in the process of deciding whether to go with Omniture or Google Analytics.
Some of the information about GA on the Net seems outdated, and it is not easy to find relevant answers to our questions.
In particular, I would appreciate some pointers regarding Google Analytics:
Is there a limit on the number of custom variables?
Is there a limit on the types of variables that can be used?
And besides:
What is your experience of the delay between the moment data is recorded on the GA side and the time it is made available in the GA account? (I have read 2-10 hours.)
Thanks
There are 5 custom variable slots. Any given pageview/visit/visitor can only occupy up to 5. In theory you could have thousands of different variables, but values in the same slot override each other: you can't store 'Is Logged In' in the same slot as 'Is Paid User' if you want to be able to track both on the same pageview, session, or user. But you could use the same slot for mutually exclusive variables that you know won't ever overlap (like 'Banned User' and 'Admin').
There's also a 6th possible variable known as the "User Defined Variable" (set via _setVar), which is the deprecated ancestor of Custom Variables but, for backwards-compatibility reasons, will likely always be around. It is a single visitor-level slot that lets you define one key-value pair.
The 'type' is basically any key-value string pair, with a limitation that the combined length of any given custom variable's key and value cannot exceed 128 characters. You can set the scope of the custom variable to be at the page-level (pageview), session-level (visit), or user-level (visitor).
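For illustration, this is roughly how a custom variable is written with the classic ga.js tracker (the slot numbers, names, and values below are arbitrary examples; scope 1 = visitor, 2 = session, 3 = page):

declare const _gaq: any[]; // provided by Google's ga.js async snippet

// Slot 1, visitor scope: persists across the visitor's sessions.
_gaq.push(['_setCustomVar', 1, 'UserType', 'Paid', 1]);
// Slot 2, session scope: lasts for the current visit only.
_gaq.push(['_setCustomVar', 2, 'IsLoggedIn', 'Yes', 2]);
// Custom variables are sent along with the next tracked hit:
_gaq.push(['_trackPageview']);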
The length of time for data processing is inconsistent. Sometimes the most basic data from pageviews, transactions and events appears within minutes, but some of the accompanying data (source information, custom variable values, etc.) does not process for another few hours. Only on very rare occasions does it take longer than 24 hours for a full snapshot of a day to be available.
I would like to add that GA and SiteCatalyst (Omniture) are in no way comparable products when you are talking about measurement you want to base decisions on.
GA wins hands down on setup and configuration (and on cost, especially the absence of extra costs), but if you want to measure anything at the visitor level, need real-time figures, or want to have any support, choose something other than GA. Based on your question and the very informative answer provided, I think you already have.
In Google Universal Analytics, you can set up to 20 "Custom Dimensions" and "Custom Metrics"; see https://developers.google.com/analytics/devguides/collection/analyticsjs/custom-dims-mets
These enable you to do just about everything you currently do with custom variables. The only downside is that they are not displayed in any standard reports, but they are very powerful when used in custom reports.
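For illustration, populating a custom dimension with analytics.js looks roughly like this (the dimension index and value are arbitrary; dimensions must first be registered in the GA admin UI):

declare const ga: (...args: any[]) => void; // provided by the analytics.js snippet

// Attach a value to custom dimension 1, then send it with a pageview.
ga('set', 'dimension1', 'Paid');
ga('send', 'pageview');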
I need to use some new drills with unmodified original .MIN CNC programs on an Okuma THINC controller (MU6300V). I'm looking to use the Okuma API to detect when tool group 4 is loaded into the spindle and then alter the speed/feed when it drills. I am familiar with the API and .NET; I'm looking for some general guidance on objects/methods and approach.
If this is too difficult, I would settle for just modifying the feed rate when a G81 drill cycle is called for a tool in group 4.
The first part of your request is pretty straightforward.
// Current tool number in the spindle
// (integer return types assumed; check the THINC API documentation)
int currentToolNumber = Okuma.CMDATAPI.DataAPI.CTools.GetCurrentToolNumber();
// Group number of the current tool
int groupNumber = Okuma.CMDATAPI.DataAPI.CTools.GetGroupNo(currentToolNumber);
Altering the drill feed/speed will be more troublesome, however.
You cannot set feed/speed overrides using the API; that is, not without some additional hardware and special options.
Other people have done it, actually. Have you ever seen Caron Engineering's Tool Monitoring Adaptive Control (TMAC)? I think that is essentially what you're asking for.
https://www.caroneng.com/products/tmac
The only other option you have is altering your part program to look for common variable values to set spindle speed and/or feed rate.
For example, use one common variable as a flag that selects whether the fixed or the variable value should be used, and another common variable to hold that variable value.
That way, on a machine that has your old drills and no THINC application altering common variables, the fixed values are used. But on a machine that has the application, it can look at the tool number or group and set a common variable that selects specific speed/feed values; those new values are then used before starting the spindle and moving into the cut.
The choices available for changing feed/speed after the machine has entered a cut or commanded the spindle to run are:
Human operator at the control panel
TMAC
I'm still in college and I'm trying my hand at designing my own applications, for practice and also for funsies, but I have some big questions.
Currently, I'm attempting to design an application that uses a relational database backend to store records related to a pen-and-paper RPG that a friend and I have been designing. It will need to store characters, weapons, items, etc. Since it's set in a sci-fi universe, there are guns and the like.
Now, I'm stuck at the conceptual stage because I'm not sure how to store some of the harder-to-model kinds of information. Since it's a tabletop RPG, there are dice involved, typically referred to as D4, D6, D10, D20, etc., and a lot of these weapons have several kinds of attacks each (they're guns, so it's like firing modes, etc.), and a typical attack would be something like "D20 + 20".
Now, I know that I could just store it as a string variable, but I was hoping to design this in such a way that I could actually add some dice-rolling functionality to it. Is there a simple or effective way of storing a Math.random range (not the result, mind you, but the actual range number) in a SQL record so that I could just grab it and use it real quick?
For extra context, I was hoping to have one table of the actual weapon templates and stats, and another table of actual instances of those weapons, so I could keep track of the ammo in each gun, who owns it, etc.
I'm using NetBeans and a Derby database. Thanks for any help, guys.
As stated above, I don't know why you wouldn't just create a Java/C#/any-programming-language application that can simulate the dice rolls for you. You could integrate the database into the application to retrieve information, or simply add the ability to input information on weapons/armour into the application in the form of popup dialog boxes (or something along those lines).
A database is primarily used to store information in a structured way and to update that information as needed. What you are suggesting is more dynamic, and has less to do with storing information than with actually playing the game. I'm not trying to change your idea; the point is that an application that utilizes a database can be written in a language other than SQL (and it's much easier to do it this way as well).
Your question is very broad, but I would not store a descriptive characteristic like "D20 + 20" in your database only to parse it out in the app. Instead, store it as two or three (depending on what it represents) attributes (columns) in your database, and let the app display it appropriately.
I don't know exactly what you mean by storing "equations" and "RNGs" in your database, but those belong in the application, not the database. You can store the inputs or parameters that drive those equations, but not the equations themselves.
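To illustrate the advice above with a hedged sketch (the table and column names here are hypothetical, and the snippet is TypeScript only for brevity): a value like "D20 + 20" decomposes into a dice count, a die size, and a flat modifier, which can be stored as three integer columns; the application then performs the roll at runtime.

// Hypothetical row shape for an ATTACK table with columns
// DICE_COUNT, DIE_SIDES, MODIFIER (so "D20 + 20" -> 1, 20, 20).
interface Attack {
  diceCount: number; // how many dice to roll
  dieSides: number;  // faces per die (4, 6, 10, 20, ...)
  modifier: number;  // flat bonus added to the total
}

// Roll an attack at runtime instead of parsing a stored string.
function roll(attack: Attack): number {
  let total = attack.modifier;
  for (let i = 0; i < attack.diceCount; i++) {
    total += 1 + Math.floor(Math.random() * attack.dieSides);
  }
  return total;
}

console.log(roll({ diceCount: 1, dieSides: 20, modifier: 20 })); // "D20 + 20"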
I'm building an API and I have a question about how to represent objects.
Imagine we have a system with Articles that have a bunch of properties. Some of these properties are complex; for example, the Author of an Article refers to another object. We have a URL to fetch all the articles in the system, and another URL to fetch a particular Article.
My first approach would be to create two representations of the same Article object, because when you request all the articles it makes sense not to retrieve all the information about each one, but, for example, just the title, the date and the name of the author (instead of the whole Author object), excluding other properties like tags or the content. The idea behind this is to make the response for all the Articles a little lighter.
Now I move to the client side and decide to implement an SDK for Android, for example. The first step would be to create the objects that store the information retrieved from the API. Now a problem pops up: I want to define the Article object, but I would need two versions of it, which is not only more difficult to implement but also more difficult to use.
So my question is: when defining an API, is it good practice to have multiple versions of the same object (say, a light one and a full one) to save some bandwidth when sending the results of a request, at the cost of a more difficult-to-use service? Or is it not worth it, and you should always return the same version of the object, generating heavier responses but making the service easier to use?
I work at a company that deals with Articles as well and we also have a REST API to expose the data.
I think you're on the right track, but I'll even take it one step further. These are the three potential calls for large entities in an API:
Index. For the articles, this would be something like /articles. It just returns a list of article ids. You can add parameters to filter, sort, etc. It's very lightweight and I've found it to be very useful.
Header/Mini/Light version. This contains only the crucial fields that you think will meet the widest variety of use cases. For us, there are a lot of use cases where we might want to display the top 5 articles, and in those cases we need only the title, the author and maybe the publication date. Those fields belong in a "header" article, or a "light" article. This is especially useful for AJAX calls, as you don't want to return the entire article (for us the object is quite large); see the sketch after this list.
Full version. This is the full article. All the text/paragraphs/image references - everything. It's a heavy call to make, but you will be guaranteed to get whatever is available.
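As a sketch of how the light and full representations might be shaped (all field names here are illustrative assumptions, not from the question):

// (2) Header/light version: just enough to render a list or teaser.
interface ArticleHeader {
  id: string;
  title: string;
  authorName: string;  // flattened, instead of the whole Author object
  publishedAt: string; // e.g. an ISO 8601 date
}

// (3) Full version: everything the server has for the article.
interface Article extends ArticleHeader {
  author: Author;   // the complete nested object
  tags: string[];
  body: string;     // full text/paragraphs/image references
}

interface Author {
  id: string;
  name: string;
}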
Then it just takes discipline to leave the objects the way they are. Ideally users are able to get the version described in (2) to save time over the wire, but if they have to, they go with (3).
I've considered having a dynamic way to return only the fields people are interested in, but it would be a lot of implementation work. Basically, the idea was to let the user go to /article and be shown a sample JSON result. The user could then click on the fields they wanted returned and get a token, and pass that token as a parameter to the API, which would then know which fields to return.
That creates a dynamic schema. It's a lot of work, and I never got around to it, but it shows that you can be creative if you want to.
Consider whether your data (for one API client) is changing a lot or not. If it's possible to cache data on the client, that'll improve performance by not contacting the API as much. Otherwise I think it's a good idea to have a light-weight and full-scale object type (or more like two views of the same object type).
In the client you should implement it as one object type (to keep it DRY: Don't Repeat Yourself) with all the properties. When fetching a lightweight object, you only store a few of the properties, the rest being null (or a similar "undefined" value for the given property type). It should be possible to determine whether all of the properties are loaded or only a partial subset.
When making API requests in the client for a given model (e.g. authors), you should be explicit about whether the lightweight or the full-scale object is needed, and about whether cached data is acceptable. This makes it possible to control the data in the UI layer. For example, a list of authors might only need to display a name and the number of articles connected with that author, while the author screen needs more properties. Also, if using cached data, you should provide a way for the user to refresh it.
Once the app works, you can start to implement optimizations such as: don't fetch lightweight data if the full-scale data is already known, and don't fetch data at all if a recent cached copy exists. I think it's best to look at the actual use cases and improve performance where it has the highest value for the user.
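A minimal sketch of that single-type approach (property names are hypothetical): one client-side type whose optional properties stay undefined until a full fetch has happened, plus a flag recording how much was loaded.

// One client-side type for both representations.
interface Author {
  id: string;
  name: string;           // present in the lightweight response
  articleCount?: number;  // lightweight too, in this sketch
  biography?: string;     // only present after a full fetch
  articleIds?: string[];  // full fetch only
  fullyLoaded: boolean;   // true once the full-scale object was fetched
}

// The UI layer can then decide whether another request is needed:
function needsFullFetch(author: Author): boolean {
  return !author.fullyLoaded;
}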
Are Google Analytics custom variables a good way to segment different parts of a very large site? Currently the sections are just segmented and filtered by title tag, but we'd like to get both a specific look at which pages are viewed in each segment and a view of the overall health of each segment. I've seen it said that Google custom variables can be overwritten; is this going to cause a problem for getting accurate results?
You can use custom variables for segmentation (that's what they are for), but in standard GA there are only five custom vars in three different scopes (page, session, visitor), and variables in different scopes but the same "slot" might interfere with each other. So using custom vars requires more thought and more testing than one would think (especially since you will get results in any case, so you need good tests to separate data from noise).
You might therefore want to investigate some of the more straightforward options first: if your site is strictly hierarchical, you might be able to use the URL scheme, or something like that; this should do anything that can be achieved with page-scope custom vars.
If you want to segment by user behaviour during a visit, or across multiple recurring visits, you'll have to use session- and visitor-scope custom variables. If at all possible, do not re-use a given "slot" (custom vars are numbered from 1 to 5) across different scopes.
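For the site-section use case specifically, a minimal ga.js sketch (the slot number and names are arbitrary; reserve the chosen slot for this purpose site-wide): set a page-scope custom variable before the pageview is tracked.

declare const _gaq: any[]; // provided by the ga.js snippet on the page

// Scope 3 = page level; the variable must be set before _trackPageview.
_gaq.push(['_setCustomVar', 1, 'Section', 'Blog', 3]);
_gaq.push(['_trackPageview']);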
Preference objects provide a way to store arbitrary data in Rally that can be combined with other Rally information.
For example, if I want to calculate defect density and see a graph in Rally, I can't, because I don't have KLOC information in Rally. But if I write a script that periodically drops my current line count, every iteration or so, into a preference object with a well-known ID, I can do this easily.
But should I? And if so, what are the limitations of preference objects in Rally? How much data can I safely store in them, and how many preference objects can the system reasonably handle? Is it hundreds, thousands, tens of thousands? Our instance already has thousands of these just from the standard apps that are installed, so it looks like the answer is at least thousands.
We currently do not place any restrictions on the use of preferences, and frankly, I don't think we know the limits of its use. For the load that you are suggesting, I suspect you will not exceed those limits.
On another front, I'd love to hear more about the analysis you have in mind. Before coming to Rally, I did a bit of work using LOC to normalize metrics as well as to heuristically determine artifact dependencies. Now at Rally, I have both the analytics features and the connector features within my domain of responsibility as a Product Owner, and I've been exploring ways to responsibly use LOC at Rally.
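As an assumption-laden sketch of the kind of periodic script the question describes: Rally's WSAPI exposes a Preference type with Name and Value fields, so a job could post the current line count under a well-known name. The endpoint shape, auth header, and payload below should all be verified against the WSAPI documentation for your instance.

// Sketch only: verify the endpoint, auth scheme, and payload against the
// Rally WSAPI docs before relying on any of this.
const RALLY = 'https://rally1.rallydev.com/slm/webservice/v2.0';

async function storeKloc(kloc: number, apiKey: string): Promise<void> {
  const response = await fetch(`${RALLY}/preference/create`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      ZSESSIONID: apiKey, // Rally API-key header
    },
    body: JSON.stringify({
      Preference: {
        Name: 'kloc-snapshot', // the "well-known ID" from the question
        Value: JSON.stringify({ kloc, at: new Date().toISOString() }),
      },
    }),
  });
  if (!response.ok) throw new Error(`Rally returned ${response.status}`);
}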