What do these questions mean and how do I approach them? [closed] - testing

I am currently writing documentation for my finished product; however, I do not understand what the question wants by asking for:
Qualitative assessment of performance
Quantitative assessment of performance

A qualitative assessment of performance is an assessment which doesn't use specific measurements but compares the performance with the expectations or needs of the user. So something like:
The performance of the import is low, but acceptable for the intended use.
In most cases the application reacts to user input so quickly that no waiting time is perceived.
A quantitative assessment is based on measurements:
The import processes 1 million records per hour
98% of all user interactions are processed within 0.2 seconds
More detailed information, such as standard deviations or a plot of a measure against some variable, would also be a quantitative assessment.
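As a small illustration, a quantitative assessment can be computed directly from measured latencies. This is only a sketch with made-up numbers and a made-up threshold, not measurements from any real system:

```python
import statistics

# Hypothetical latencies of user interactions, in seconds
latencies = [0.05, 0.12, 0.08, 0.31, 0.09, 0.15, 0.11, 0.07, 0.22, 0.10]

threshold = 0.2  # the target from the statement above
within_target = sum(1 for t in latencies if t <= threshold) / len(latencies)

print(f"{within_target:.0%} of interactions finished within {threshold} s")
print(f"mean  = {statistics.mean(latencies):.3f} s")
print(f"stdev = {statistics.stdev(latencies):.3f} s")
```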
Note that both assessments are important. The quantitative is great for comparisons, for example if you want to compare the performance of two versions of an application.
The qualitative assessment is what really matters. In the end it often doesn't matter how many millions of records you process per millisecond. The question is: is the user satisfied? And in most cases the user doesn't base their feelings on some measurement, but on ... well ... their feelings.

Related

Is there a scientific field dedicated to the quantification of intelligent behavior? [closed]

One of the biggest struggles in ML research is the creation of objective functions which capture the researcher's goals. Especially when talking about generalizable AI, the definition of the objective function is very tricky.
This excellent paper for instance attempts to define an objective function to reward an agent's curiosity.
If we could measure intelligent behavior well, it would perhaps be possible to perform an optimization in which the parameters of a simulation such as a cellular automaton are optimized to maximize the emergence of increasingly intelligent behavior.
I vaguely remember having come across a group of cross-discipline researchers who were attempting to use the information theory concept of entropy to measure intelligent behavior but cannot find any resources about it now. So is there a scientific field dedicated to the quantification of intelligent behavior?
The field is called Integrated Information Theory (IIT), initially proposed by Giulio Tononi. It attempts to quantify the consciousness of systems by formally defining the phenomenological experience of consciousness and computing a value, Phi, meant as a proxy for "consciousness".
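The question also mentions entropy-based approaches. As a very rough illustration of that idea (this is not IIT's Phi, which requires the full formalism; tools such as the PyPhi library exist for that), one could score how varied an agent's observed behaviour is with Shannon entropy. The function and the two agent traces below are purely hypothetical:

```python
import math
from collections import Counter

def action_entropy(actions):
    """Shannon entropy (in bits) of an agent's observed action distribution.
    A crude, illustrative proxy for behavioural richness -- not IIT's Phi."""
    counts = Counter(actions)
    total = len(actions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical action traces from two agents
repetitive_agent = ["left"] * 90 + ["right"] * 10
exploratory_agent = ["left", "right", "up", "down"] * 25

print(action_entropy(repetitive_agent))   # low entropy: highly repetitive behaviour
print(action_entropy(exploratory_agent))  # higher entropy: more varied behaviour
```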

Advice on how to structure LSTM input Data [closed]

I've built a model to predict the price of a particular stock. I have all the hourly candle data for this stock for the last three years, as well as additional features.
Right now, the input vector shape is [206, 72, 9]. The 72 is three days of hourly candles, and the 9 is the number of features.
My first question is: is there an optimal number of candles to pass in for the second dimension? Would [618, 24, 9] potentially improve the results?
My second question is: right now the data [1,2,3,4,5,6] is passed in as [1,2,3],[4,5,6], which contains no overlapping hours. Would changing this to [1,2,3],[2,3,4],[3,4,5],[4,5,6] also potentially improve the results?
Let me try to answer both your questions concurrently.
It is possible that more data (both in terms of more time steps and overlapping series) might improve the results; however, there are situations where too much data can also be a detriment to your forecasts.
One of the disadvantages of using LSTM models for time series forecasting is that they tend to carry too much volatility from previous time steps forward into subsequent forecasts, which can make the model an unsuitable candidate for analyzing trending data; they are best used for time series that are highly volatile. Therefore, in answer to your question, it is possible that too much data could be as bad as not having enough data. It all depends on the time series under analysis.
In this regard, you should consider the price trend of your stock. If it is a stock that is highly volatile, e.g. a small-cap stock, then an LSTM model might work well. However, if it is a large-cap stock, or one that has a clear trend in the data over time, then LSTM might prove unsuitable.
You might find the following article on using LSTM to forecast oil prices useful: it shows that with a strong trend in the data, LSTM proves too volatile to forecast effectively.
Question 1: The optimal amount is like any other model hyperparameter: you need to find it yourself. Each model and each dataset is different, and it's impossible to give a ready answer.
But in general:
Too short: not enough data, won't learn
Too long: may be too much processing for very little gain (or even loss)
Question 2: Yes, you would likely get an improvement from using sliding windows, because you have more data and therefore better generalization (unless your original dataset was already long enough that it was sufficient).
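As a sketch of the difference between the two layouts (assuming NumPy and hypothetical hourly data with 9 features), non-overlapping and stride-1 sliding windows can be built like this:

```python
import numpy as np

hours, n_features = 14832, 9           # ~3 years of hourly candles (hypothetical)
data = np.random.rand(hours, n_features)
window = 72                            # 3 days of hourly candles

# Non-overlapping windows -> shape like [206, 72, 9]
n_chunks = hours // window
non_overlapping = data[:n_chunks * window].reshape(n_chunks, window, n_features)

# Overlapping (stride-1) sliding windows -> shape [hours - window + 1, 72, 9]
overlapping = np.stack([data[i:i + window] for i in range(hours - window + 1)])

print(non_overlapping.shape)  # (206, 72, 9)
print(overlapping.shape)      # (14761, 72, 9)
```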

Basic benchmark for OLTP databases [closed]

I'm looking for benchmarks that tested different RDBMSs running in the same environment, to use as a reference for a project. I'm not looking for any test in particular; I just want a source of comparison for a few RDBMSs, something like TechEmpower's benchmarks for development frameworks. Does anyone know where I can find this? It would be really helpful for me. Thanks in advance.
TPC-C, at http://www.tpc.org, is a benchmark that simulates a complete computing environment where multiple users execute transactions against a database.
The benchmark is centered around the transactions of an order-entry environment.
These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.
TPC-C involves a mix of five concurrent transactions of different types and complexity, either executed on-line or queued for deferred execution. The workload is characterized by:
The simultaneous execution of multiple transaction types that span a breadth of complexity
On-line and deferred transaction execution modes
Significant disk input/output
Transaction integrity (ACID properties)
Non-uniform distribution of data access through primary and secondary keys
Databases consisting of many tables with a wide variety of sizes, attributes, and relationships
Contention on data access and update
It is used for selecting the best hardware/database combination and for comparing price/performance.
You can find TPC-C results for many database engines running in different environments at:
The TPC defines transaction processing and database benchmarks and delivers trusted results to the industry.
You can sort and filter by the environment you need.
Concentrate on two numbers in the results table (which depend on the database and the hardware/OS):
tpmC: an absolute throughput number; the higher, the better the performance
Price/tpmC: the total system price divided by tpmC, i.e. the cost per unit of throughput; the lower, the better
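Once you have a set of published results, shortlisting systems is just a matter of sorting on those two columns. A minimal sketch with made-up rows (the systems and numbers below are illustrative, not real TPC results):

```python
# Hypothetical TPC-C-style result rows: (system, tpmC, price_per_tpmC in USD)
results = [
    ("System A", 1_200_000, 0.45),
    ("System B", 3_500_000, 0.60),
    ("System C", 2_100_000, 0.30),
]

# Highest raw throughput first
by_throughput = sorted(results, key=lambda r: r[1], reverse=True)

# Cheapest per unit of throughput first
by_price_perf = sorted(results, key=lambda r: r[2])

print("Best throughput:       ", by_throughput[0][0])
print("Best price/performance:", by_price_perf[0][0])
```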

Get estimated execution time during design phase [closed]

I am going to develop some business logic. During the requirements phase, I need to provide an SLA.
The business always wants everything in less than a second. Sometimes it is very frustrating; they are not bothered about complexity.
My DB is SQL Server 2012, and it is a transactional DB.
Is there any formula which will take the number of tables, columns, etc. and provide an estimate?
No, you won't be able to get an execution time. Not only does the number of tables/joins factor in, but also how much data is returned, network speed, load on the server, and many other factors. What you can get is a query plan. SQL Server generates a query plan for every query it executes, and the execution plan will give you a "cost" value that you can use as a VERY general guideline about the query's performance. Check out these two links for more information...
1) this is a good StackOverflow question/answer.
2) sys.dm_exec_query_plan
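As a rough sketch of how you might pull the estimated plan programmatically (assuming pyodbc, a hypothetical connection string, and hypothetical table names; SET SHOWPLAN_XML ON makes SQL Server return the estimated plan as XML instead of executing the statement):

```python
import pyodbc

# Hypothetical connection string and table names -- adjust to your environment
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=MyDb;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# With SHOWPLAN_XML ON, SQL Server returns the *estimated* execution plan
# as XML instead of running the query, so no data is touched.
cur.execute("SET SHOWPLAN_XML ON")
cur.execute("""
    SELECT o.OrderId, c.Name
    FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
""")
plan_xml = cur.fetchone()[0]
cur.execute("SET SHOWPLAN_XML OFF")

# The StatementSubTreeCost attributes hold the estimated cost figure
print(plan_xml[:1000])
```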

How big should the database be in order to take 30 minutes to query [closed]

We have a few normalized tables, each containing 2.5 million rows on average. Then there is a select query with joins. It takes more than 30 minutes to execute. The DB server runs on a machine with 9 GB of RAM and a quad-core Xeon processor.
So, since I've never worked with big data, I'm trying to understand whether it is a bad query problem or a hardware problem. Any information is appreciated.
In my experience, a 30 minute query is not strictly a result of DB size.
There are many variables in such a situation, depending on what you are considering your query time. Are you referring to execution time perceived at the user end (e.g., web-page request or application response time)? Or are you referring to the raw query as executed on the database directly (through a DB manager or the command line)?
If you are indeed referring to the execution time of the raw query directly on the database, my next step to determine bottlenecks would be to use the SQL EXPLAIN modifier, or an application like HeidiSQL to benchmark the query and get a breakdown of the query components.
My guess is you are not correctly utilizing indexes and the DB has to create temporary indices and tables and execute against these. That would be my knee-jerk assumption.
Our truncated development database runs complex queries against tables ranging from 1-3 million rows (it contains a small subset of our production database and still clocks in at 16 gigs), and while we do sometimes hit ~15 minutes, those are huge queries.
Don't look at the hardware before you have confidence in your software.
Post your query with EXPLAIN PLAN details.
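To illustrate what an EXPLAIN-style breakdown tells you about index usage, here is a self-contained sketch using SQLite's EXPLAIN QUERY PLAN; the idea carries over to other engines, though the exact syntax and output differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

def show_plan(sql):
    # Ask the planner how it would execute the statement, without running it
    for row in conn.execute("EXPLAIN QUERY PLAN " + sql):
        print(row)

# Without an index on customer_id the planner falls back to a full table scan
show_plan("SELECT * FROM orders WHERE customer_id = 42")   # -> 'SCAN orders'

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index in place the same query becomes an index search
show_plan("SELECT * FROM orders WHERE customer_id = 42")   # -> 'SEARCH orders USING INDEX ...'
```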