I am using AngularJS, Breeze, and MSSQL 2012.
I have several queries that take about 40 seconds to a minute to complete. It is a search function that goes through several tables, some with over 900K records, with several joins.
It fails with a few errors:
500 (Internal Server Error)
[Error] Error retrieving data: The wait operation timed out
Error: The wait operation timed out
I am not sure whether the error is with Breeze or Angular, but I'd like to make the wait time longer than a minute. The query does work on the server.
I've tried using $timeout from Angular, but it doesn't seem to work:
getSearch().then(function () {
    common.$timeout(function () {
        toggleSearchSpinner();
    }, 1250);
});
I'm not quite sure how to use the timeout function.
I do have $timeout defined in the common module:
commonModule.factory('common',
    ['$q', '$rootScope', '$timeout', 'commonConfig', 'logger', common]);

function common($q, $rootScope, $timeout, commonConfig, logger) {
    var throttles = {};

    var service = {
        // common angular dependencies
        $broadcast: $broadcast,
        $q: $q,
        $timeout: $timeout,
    };

    return service;
}
I have multiple places in the application where queries might just take a long time, and unfortunately that is unavoidable. It would be great if there were a one-time setting that prolongs the timeout wait time. Is there?
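Note that Angular's $timeout only delays a callback (here, the spinner toggle); it does not extend how long the underlying HTTP request is allowed to run. If there is a one-time setting, it would live on Breeze's ajax adapter rather than in Angular. A minimal sketch, assuming Breeze's default jQuery ajax adapter and a global breeze object:

// Run once at startup: raise the HTTP timeout for every Breeze query.
var ajaxAdapter = breeze.config.getAdapterInstance('ajax');
ajaxAdapter.defaultSettings = {
    timeout: 120000 // milliseconds; passed through to the underlying $.ajax call
};

The server may still cut the query off on its own, so the database command timeout on the Web API side may also need to be raised for queries this slow.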
Related
We have a .NET Core API accessing Azure SQL (Gen5, 4 vCores).
For quite some time, the API has been throwing the exception below for a specific READ operation:
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout
Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The READ operation has code that reads rows of data and converts an XML column into a specific output format.
Most of the read operations extract hardly 4-5 rows at a time.
The tables involved in the query have ~500,000 rows.
We are clueless about the root cause of this issue.
Any hints on where to start looking for the root cause?
Any pointer would be highly appreciated.
NOTE: The connection string has the following settings, among others:
MultipleActiveResultSets=True;Connection Timeout=60
Overall, the code looks something like this.
HINT: The above timeout exception comes from ConvertHistory, when the 2nd table is being read.
[HttpGet]
public async Task<IEnumerable<SalesOrder>> GetNewSalesOrders()
{
    var salesOrders = await _db.SalesOrders
        .Where(o => o.IsImported == false)
        .OrderBy(o => o.ID)
        .ToListAsync();

    var orders = new List<SalesOrder>();
    foreach (var so in salesOrders)
    {
        var order = ConvertSalesOrder(so);
        orders.Add(order);
    }
    return orders;
}
private SalesOrder ConvertSalesOrder(SalesOrder o)
{
    var newOrder = new SalesOrder();
    var oXml = o.XMLContent.LoadFromXMLString<SalesOrder>();
    ...
    newOrder.BusinessUnit = oXml.BusinessUnit;
    var history = ConvertHistory(o.ID);
    newOrder.history = history;
    return newOrder;
}

private SalesOrderHistory[] ConvertHistory(string id)
{
    var history = _db.OrderHistory.Where(o => o.ID == id);
    ...
}
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
From the Microsoft documentation:
You will get this error under two conditions: a connection timeout, or a query or command timeout. First, identify which one it is from the call stack of the error messages.
If you find it is a connection issue, you can increase the connection timeout parameter. If you are still getting the same error, it is caused by a network issue.
From the information you provided, it is a query or command timeout error. To work around this error, you can set CommandTimeout for the query or command:
command.CommandTimeout = 10;
The default timeout value is 30 seconds; if the timeout value is set to 0 (no time limit), the query will continue to run until it is finished.
For more information, refer to Troubleshoot query time-out errors provided by Microsoft.
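Since the question's code goes through a DbContext rather than a raw SqlCommand, the equivalent knob would be the context-level command timeout. A minimal sketch, assuming _db is an EF Core DbContext (the 120-second value is just an example):

// Raise the command timeout for all commands issued through this context.
// Can be set once, e.g. in the DbContext constructor or at startup.
_db.Database.SetCommandTimeout(120); // seconds

That said, raising the timeout only hides the symptom; it is still worth checking the query plan and the index on OrderHistory.ID for the underlying cause.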
I have a common REST controller:
private final KafkaReceiver<String, Domain> receiver;

@GetMapping(produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Domain> produceFluxMessages() {
    return receiver.receive()
            .map(ConsumerRecord::value)
            .timeout(Duration.ofSeconds(2));
}
What I am trying to achieve is to collect messages from a Kafka topic for a certain period of time, and then just stop consuming and consider the flux completed. If I remove the timeout and open this in a browser, I get messages forever and the download never stops. With the timeout, consuming stops after 2 seconds, but I get an exception:
java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 2000ms in 'map' (and no fallback has been configured)
Is there a way to successfully complete Flux after timeout?
There are multiple overloads of the timeout() method - you're using the standard one, which throws an exception on timeout.
Instead, use the overload that takes a fallback publisher, and provide an empty one:
timeout(Duration.ofSeconds(2), Mono.empty())
(Note that in the general case you could explicitly catch the TimeoutException and fall back to an empty publisher using onErrorResume(TimeoutException.class, e -> Mono.empty()), but that's much less preferable to the option above where possible.)
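Applied to the controller from the question, the stream then completes normally after two quiet seconds instead of erroring; only the timeout line changes:

@GetMapping(produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Domain> produceFluxMessages() {
    return receiver.receive()
            .map(ConsumerRecord::value)
            // fall back to an empty publisher instead of signalling TimeoutException
            .timeout(Duration.ofSeconds(2), Mono.empty());
}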
I need to get the data from my Pub/Sub message and insert it into BigQuery.
What I have:
const topicName = "-----topic-name-----";
const data = JSON.stringify({ foo: "bar" });

// Imports the Google Cloud client library
const { PubSub } = require("@google-cloud/pubsub");

// Creates a client; cache this for further use
const pubSubClient = new PubSub();

async function publishMessageWithCustomAttributes() {
  // Publishes the message as a string, e.g. "Hello, world!" or JSON.stringify(someObject)
  const dataBuffer = Buffer.from(data);

  // Add two custom attributes, origin and username, to the message
  const customAttributes = {
    origin: "nodejs-sample",
    username: "gcp",
  };

  const messageId = await pubSubClient
    .topic(topicName)
    .publish(dataBuffer, customAttributes);

  console.log(`Message ${messageId} published.`);
}

publishMessageWithCustomAttributes().catch(console.error);
I need to get the data/attributes from this message and insert them into BigQuery - can anyone help me?
Thanks in advance!
In fact, there are two ways to consume the messages: either message by message, or in bulk.
First, before going into detail: because you will perform BigQuery calls (or Facebook API calls), you will spend a lot of the processing time waiting for the API response.
Message per Message
If you have an acceptable volume of messages, you can process them message by message. You have two options here:
You can handle each message with Cloud Functions. Set the minimal amount of memory for the function (128MB) to limit the CPU allocation and thus the overall cost. Indeed, because you will mostly be waiting, don't pay for expensive CPU to do nothing! You will process the data more slowly when it arrives, but it's a tradeoff.
Create a Cloud Function on the topic, or a push subscription that calls an HTTP-triggered Cloud Function.
You can also handle requests concurrently with Cloud Run. Cloud Run can handle up to 250 concurrent requests (in preview), and because you will mostly be waiting, it's perfectly suitable. If you need more CPU and memory, you can increase these values to 4 CPU and 8GB of memory. It's my preferred solution.
Bulk processing
Bulk processing is possible if you can easily manage multi-CPU, multi-(light)thread development. It's easy in Go. Concurrency in Node is also easy (async/await), but I don't know whether it's multi-CPU capable or single-CPU only. Anyway, the principle is the following (a sketch of the pull loop follows the list):
Create a pull subscription on the PubSub topic.
Create a Cloud Run service (better for multi-CPU, but this also works with App Engine or Cloud Functions) that listens to the pull subscription for a while (let's say 10 minutes).
For each message pulled, run an async process: get the data/attributes, make the call to BigQuery, ack the message.
After the timeout of the pull connection, stop listening for messages, finish processing the current messages, and exit gracefully (return a 200 HTTP code).
Create a Cloud Scheduler job that calls the Cloud Run service every 10 minutes. Set the timeout to 15 minutes and discard retries.
Deploy the Cloud Run service with a timeout of 15 minutes.
This solution offers better message throughput (you can process more than 250 messages per Cloud Run service), but it doesn't have a real advantage because you are limited by the API call latency.
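A minimal sketch of that pull loop in Node, assuming the @google-cloud/pubsub client; the subscription name is a placeholder:

const { PubSub } = require("@google-cloud/pubsub");
const pubSubClient = new PubSub();

// Listen to a pull subscription for a fixed window, then shut down gracefully.
// "my-pull-subscription" is a hypothetical name.
function pullForAWhile(listenMs) {
  const subscription = pubSubClient.subscription("my-pull-subscription");
  subscription.on("message", async (message) => {
    // get the data/attributes, make the BigQuery call, then ack
    console.log(message.data.toString(), message.attributes);
    message.ack();
  });
  // After the window, stop pulling; handlers already in flight finish first.
  setTimeout(() => subscription.close(), listenMs);
}

pullForAWhile(10 * 60 * 1000); // listen for 10 minutes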
EDIT 1
Code sample
// For a Pub/Sub-triggered function
exports.logMessageTopic = (message, context) => {
  console.log("Message Content");
  console.log(Buffer.from(message.data, 'base64').toString());
  console.log("Attribute list");
  for (let key in message.attributes) {
    console.log(key + " -> " + message.attributes[key]);
  }
};

// For a push subscription
exports.logMessagePush = (req, res) => {
  console.log("Message Content");
  console.log(Buffer.from(req.body.message.data, 'base64').toString());
  console.log("Attribute list");
  for (let key in req.body.message.attributes) {
    console.log(key + " -> " + req.body.message.attributes[key]);
  }
  res.status(204).send(); // acknowledge the push delivery
};
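To close the loop on the original question (getting the data into BigQuery rather than just logging it), a minimal sketch using the @google-cloud/bigquery client; the dataset and table names are hypothetical:

const { BigQuery } = require("@google-cloud/bigquery");
const bigquery = new BigQuery();

// Pub/Sub-triggered function: parse the message and stream it into BigQuery.
exports.pubSubToBigQuery = async (message, context) => {
  const payload = JSON.parse(Buffer.from(message.data, 'base64').toString());
  const row = {
    ...payload,                          // e.g. { foo: "bar" } from the publisher
    origin: message.attributes.origin,   // custom attributes set by the publisher
    username: message.attributes.username,
  };
  // "my_dataset" and "my_table" are placeholders for your own names.
  await bigquery.dataset("my_dataset").table("my_table").insert([row]);
};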
I'm investigating some performance problems in an experimental scheduling application I'm working on. I found that calls to session.SaveChanges() were pretty slow, so I wrote a simple test.
Can you explain why the first iteration of the loop takes 200 ms and subsequent iterations 1-2 ms? And how can I leverage this in my application (I don't mind the first call being this slow if all subsequent calls are quick)?
private void StoreDtos()
{
    for (int i = 0; i < 3; i++)
    {
        StoreNewSchedule();
    }
}

private void StoreNewSchedule()
{
    var sw = Stopwatch.StartNew();
    using (var session = DocumentStore.OpenSession())
    {
        session.Store(NewSchedule());
        session.SaveChanges();
    }
    Console.WriteLine("Persisting schedule took {0} ms.",
        sw.ElapsedMilliseconds);
}
Output is:
Persisting schedule took 189 ms. // first time
Persisting schedule took 2 ms. // second time
Persisting schedule took 1 ms. // ... etc
The above is for an in-memory database. Using an HTTP connection to a RavenDB instance (on the same machine), I get similar results. The first call takes noticeably more time:
Persisting schedule took 1116 ms.
Persisting schedule took 37 ms.
Persisting schedule took 14 ms.
On Github: RavenDB 2.0 testcode and RavenDB 2.5 testcode.
The very first time that you call RavenDB, several things have to happen:
We need to prepare the serializers for your entities, which takes time.
We need to create the TCP connection to the server.
On subsequent calls, we can reuse the already-open connection and the created serializers.
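One way to leverage this (a sketch, not from the original answer): pay the connection cost at application startup with a throwaway call, so less of the one-time overhead lands on the first user-facing SaveChanges. Assuming the same DocumentStore as in the question:

// Warm-up at startup: forces the TCP connection to the server to be
// created up front. The document ID is a dummy; Load simply returns
// null when no such document exists.
using (var session = DocumentStore.OpenSession())
{
    session.Load<object>("warmup/dummy-id");
}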
I have a simple Selenium test that runs against a remote Selenium Server instance.
I'm trying to test for page performance, and some pages can exceed the max execution time, and I'm trying to catch that.
No matter what I put in setTimeout(), it always waits for the full page to load or for the server to time out.
public static $browsers = array(
    array(
        'name'    => 'Firefox on Ubuntu',
        'browser' => '*firefox',
        'host'    => 'dev-ubuntudesktop',
        'port'    => 4444,
        'timeout' => '1000',
    ),
);
public function testSlowPage() {
    $this->setTimeout(1000);
    $this->open('myslowaddress');
    $this->assertTextNotPresent('Internal Server Error');
}
Even though I'm not using openAndWait, the example above doesn't reach the assert line until the page has loaded or the web server terminates the request.
What I'd really like is a test that confirms "the page loads in under 1 second" without waiting 30 seconds (or whatever the PHP timeout happens to be set to).
The open method implicitly invokes a wait, whether you want it to or not, and this wait defaults to 30 seconds. setTimeout is what you use when your page does not load within those 30 seconds: if the page takes longer, set a larger timeout or your test will fail. So it would be:
selenium.setTimeout(timeOut); // set before open so the page-load wait uses it
selenium.open(appURL);
Now, coming to your test objective: you can assert on how long the page takes to load. I have checked against 1 second here.
(It's in Java, but you should be able to find the PHP equivalent.)
long testStartTime = System.currentTimeMillis();
selenium.open(appURL);
long testEndTime = System.currentTimeMillis();
// Fail unless the page loaded in under 1 second
Assert.assertTrue((testEndTime - testStartTime) < 1000, "Page took 1 second or longer to load");
Notice that I have not used the setTimeout method here, so if the page does not load within 30 seconds the test fails anyway, and you know the page did not load within 30 seconds. If you still want to measure the page load time in that situation, you can combine exception handling with the same time calculation, as sketched below.
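A sketch of that last idea, assuming the Selenium RC Java client (com.thoughtworks.selenium), where an expired page-load timeout surfaces as a SeleniumException:

long start = System.currentTimeMillis();
try {
    selenium.open(appURL);
} catch (SeleniumException e) {
    // open() threw because the page-load timeout expired; keep going so we
    // can still report how long we waited.
}
long loadTimeMs = System.currentTimeMillis() - start;
Assert.assertTrue(loadTimeMs < 1000, "Page load took " + loadTimeMs + " ms");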