I've been trying to monitor when USB devices are inserted or removed, and it seems to be working pretty well. The only thing troubling me now is that the event fires multiple times each time I plug a device in or remove it.
I can group the events with no problem, but I am curious why this happens in the first place.
This is the query I'm using:
SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 2 or EventType = 3
which fires when a device is inserted or removed, and the following modified version...
SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 2 or EventType = 3 GROUP WITHIN 1
groups the events over a one-second interval. Can someone explain why the events are triggered multiple times?
For completeness, here is the rest of the code:
static void Main(string[] args)
{
    var watcher = new ManagementEventWatcher();
    var query = new WqlEventQuery("SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 2 or EventType = 3 GROUP WITHIN 1");
    watcher.EventArrived += new EventArrivedEventHandler(watcher_EventArrived);
    watcher.Query = query;
    watcher.Start();
    Console.WriteLine("Press a key to exit.");
    Console.ReadKey();
}

static void watcher_EventArrived(object sender, EventArrivedEventArgs e)
{
    Console.WriteLine(string.Format("--> {0}", e.NewEvent.GetType().Name));
    Console.WriteLine(string.Format("    {0}", e.NewEvent.ClassPath.ClassName));
    Console.WriteLine(string.Format("    Properties [{0}]", e.NewEvent.Properties.Count));
    foreach (var prop in e.NewEvent.Properties)
    {
        Console.WriteLine(string.Format("    Name: {0} Origin: {1} Type: {2} = {3}",
            prop.Name, prop.Origin, prop.Type.ToString(), prop.Value == null ? "{null}" : prop.Value.ToString()));
    }
}
This can occur if your USB disk exposes two virtual disks, or has some other configuration that raises several change events for a single insertion or removal.
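If grouping in WQL isn't enough, you can also coalesce the burst on the client side. Below is a minimal C# sketch, not the only way to do it: it restarts a timer on every raw event and reacts once the burst has settled. The 500 ms window is an arbitrary choice, and OnDeviceChangeSettled is a hypothetical handler name.

// Coalesces bursts of Win32_DeviceChangeEvent notifications: each raw event
// restarts the timer, so the Elapsed handler runs once no event has arrived
// for 500 ms. The window length is an assumption, not a documented value.
private static readonly System.Timers.Timer debounce =
    new System.Timers.Timer(500) { AutoReset = false };

static void watcher_EventArrived(object sender, EventArrivedEventArgs e)
{
    debounce.Stop();   // reset the quiet-time window
    debounce.Start();
}

static void OnDeviceChangeSettled(object sender, System.Timers.ElapsedEventArgs e)
{
    Console.WriteLine("Device change detected (burst settled).");
}

// Wire-up in Main(), before watcher.Start():
// debounce.Elapsed += OnDeviceChangeSettled;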
ChronicleMap versions I used: 3.22ea5 / 3.21.86
I am trying to use ChronicleMap as an LRU cache.
I have two ChronicleMaps, both with the same configuration and with allowSegmentTiering set to false. Consider one the main map and the other the backup.
When the main map gets full, a few entries are removed from it, and in the meantime the backup map is used. Once the entries have been removed from the main map, the entries from the backup map are refilled into the main map.
A code sample is shown below.
ChronicleMap<ByteBuffer, ByteBuffer> main = ChronicleMapBuilder.of(ByteBuffer.class, ByteBuffer.class).name("main")
        .entries(61500)
        .averageKey(ByteBuffer.wrap(new byte[500]))
        .averageValue(ByteBuffer.wrap(new byte[5120]))
        .allowSegmentTiering(false)
        .create();

ChronicleMap<ByteBuffer, ByteBuffer> backup = ChronicleMapBuilder.of(ByteBuffer.class, ByteBuffer.class).name("backup")
        .entries(100)
        .averageKey(ByteBuffer.wrap(new byte[500]))
        .averageValue(ByteBuffer.wrap(new byte[5120]))
        .allowSegmentTiering(false)
        .create();

System.out.println("Main Heap Size -> " + main.offHeapMemoryUsed());

SecureRandom random = new SecureRandom();
while (true)
{
    System.out.println();
    AtomicInteger entriesAdded = new AtomicInteger(0);
    try
    {
        int mainEntries = main.size();
        while /*(true) Loop until error is thrown */ (mainEntries < 61500)
        {
            try
            {
                byte[] keyN = new byte[500];
                byte[] valueN = new byte[5120];
                random.nextBytes(keyN);
                random.nextBytes(valueN);
                main.put(ByteBuffer.wrap(keyN), ByteBuffer.wrap(valueN));
                mainEntries++;
            }
            catch (Throwable t)
            {
                System.out.println("Max Entries is not yet reached!!!");
                break;
            }
        }
        System.out.println("Main Entries -> " + main.size());
        for (int i = 0; i < 10; i++)
        {
            byte[] keyN = new byte[500];
            byte[] valueN = new byte[5120];
            random.nextBytes(keyN);
            random.nextBytes(valueN);
            backup.put(ByteBuffer.wrap(keyN), ByteBuffer.wrap(valueN));
        }
        AtomicInteger removed = new AtomicInteger(0);
        AtomicInteger i = new AtomicInteger(Math.max((backup.size() * 5), ((main.size() * 5) / 100)));
        main.forEachEntry(c -> {
            if (i.get() > 0)
            {
                c.context().remove(c);
                i.decrementAndGet();
                removed.incrementAndGet();
            }
        });
        System.out.println("Removed " + removed.get() + " Entries from Main");
        backup.forEachEntry(b -> {
            ByteBuffer key = b.key().get();
            ByteBuffer value = b.value().get();
            b.context().remove(b);
            main.put(key, value);
            entriesAdded.incrementAndGet();
        });
        if (backup.size() > 0)
        {
            System.out.println("It will never be logged");
            backup.clear();
        }
    }
    catch (Throwable t)
    {
        // System.out.println();
        // t.printStackTrace(System.out);
        System.out.println();
        System.out.println("-------------------------Failed----------------------------");
        System.out.println("Added " + entriesAdded.get() + " Entries in Main | Lost " + (backup.size() + 1) + " Entries in backup");
        backup.clear();
        break;
    }
}
main.close();
backup.close();
The above code yields the following result.
Main Entries -> 61500
Removed 3075 Entries from Main
Main Entries -> 61500
Removed 3075 Entries from Main
Main Entries -> 61500
Removed 3075 Entries from Main
Max Entries is not yet reached!!!
Main Entries -> 59125
Removed 2956 Entries from Main
Max Entries is not yet reached!!!
Main Entries -> 56227
Removed 2811 Entries from Main
Max Entries is not yet reached!!!
Main Entries -> 53470
Removed 2673 Entries from Main
-------------------------Failed----------------------------
Added 7 Entries in Main | Lost 3 Entries in backup
In the above result, the maximum number of entries the main map can hold decreases in subsequent iterations, and the refilling from the backup map eventually fails as well.
In Issue 128 it was said that entries are deleted properly.
Then why does the above sample code fail? What am I doing wrong here? Is ChronicleMap not designed for this usage pattern?
Even if I use only one map, the maximum number of entries the map can hold shrinks after each removal of entries.
I have a very strange error with Dapper:
there is already an open DataReader associated with this Command which must be closed first
But I don't use a DataReader! I just run a select query from my server application and take the first result:
//How I run the query:
public static T SelectVersion(IDbTransaction transaction = null)
{
    return DbHelper.DataBase.Connection.Query<T>(
        "SELECT * FROM [VersionLog] WHERE [Version] = (SELECT MAX([Version]) FROM [VersionLog])",
        null, transaction, commandTimeout: DbHelper.CommandTimeout).FirstOrDefault();
}

//And how I call this method:
public Response Upload(CommitRequest message) //It is called on the server from the client
{
    //Preparing data from CommitRequest
    using (var tr = DbHelper.DataBase.Connection.BeginTransaction(IsolationLevel.Serializable))
    {
        int v = SelectQueries<VersionLog>.SelectVersion(tr) != null ? SelectQueries<VersionLog>.SelectVersion(tr).Version : 0; //Call my query here
        int newVersion = v + 1; //update version
        //Saving changes from CommitRequest to db
        //The updated version is saved to the database too, maybe that is the problem?
        return new Response
        {
            Message = String.Empty,
            ServerBaseVersion = versionLog.Version,
        };
    }
}
Most annoyingly, this exception appears at random times. I think the problem is concurrent access to the server from two clients.
Please help.
This sometimes happens if the model and the database schema don't match and an exception is raised inside Dapper.
If you really want to get to the bottom of it, the best way is to include the Dapper source in your project and debug.
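Since you suspect concurrent access: a shared static connection would also explain the symptom, because SqlConnection is not thread-safe, and two queries interleaving on one connection produce exactly this message. A minimal sketch of opening a dedicated connection per call, assuming a hypothetical DbHelper.ConnectionString property (your actual accessor will differ):

public static VersionLog SelectVersion()
{
    // One connection per call instead of a shared static connection;
    // DbHelper.ConnectionString is a hypothetical accessor for this sketch.
    using (var connection = new SqlConnection(DbHelper.ConnectionString))
    {
        connection.Open();
        return connection.Query<VersionLog>(
            "SELECT * FROM [VersionLog] WHERE [Version] = (SELECT MAX([Version]) FROM [VersionLog])",
            commandTimeout: DbHelper.CommandTimeout).FirstOrDefault();
    }
}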
I'm trying to do some specific things with Google Calendar events. I can add, delete, and get all events, and set colors on events. But I have a problem: each time I try to insert a recurring event, I get a "not valid timezone" message, and I don't know how to fix it.
This is my code:
public void AddRecurringEvents(Calendar service, Event createdEvent,
        String rule, String Summary, String Location) {
    Event event = new Event();
    // Define Date Time for each start and end time.
    DateTime start = DateTime.parseRfc3339("2014-09-30T10:00:00Z");
    DateTime end = DateTime.parseRfc3339("2014-09-30T10:25:00Z");
    event.setStart(new EventDateTime().setDateTime(start).setTimeZone("Europe/Paris"));
    event.setEnd(new EventDateTime().setDateTime(end).setTimeZone("Europe/Paris"));
    // Setting recurrence
    event.setRecurrence(Arrays.asList(rule));
    try {
        Event recurringEvent = service.events().insert("primary", event)
                .execute();
        System.out.println(createdEvent.getId());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Any ideas how to fix this problem? Is there something wrong with my code?
Thanks.
Try something like this:
Event event = new Event();
event.setSummary(en.name);
event.setDescription(en.desc);
event.setStatus("confirmed");
ArrayList<String> recurrence = new ArrayList<String>();
recurrence.add("RRULE:FREQ=YEARLY;WKST=MO;");
event.setRecurrence(recurrence);
java.util.Calendar cal = java.util.Calendar.getInstance();
cal.set(en.year, en.month-1, en.day,0,0,0);
Date startDate = cal.getTime();
Date endDate = new Date(startDate.getTime()); // same Time
DateTime start = new DateTime(startDate, TimeZone.getTimeZone("Europe/Paris"));
event.setStart(new EventDateTime().setDateTime(start).setTimeZone("Europe/Paris"));
DateTime end = new DateTime(endDate, TimeZone.getTimeZone("Europe/Paris"));
event.setEnd(new EventDateTime().setDateTime(end).setTimeZone("Europe/Paris"));
Event createdEvent = client.events().insert( "primary", event).execute();
I want to insert 1000000 documents into RavenDB.
class Program
{
    private static string serverName;
    private static string databaseName;
    private static DocumentStore documentstore;
    private static IDocumentSession _session;

    static void Main(string[] args)
    {
        Console.WriteLine("Start...");
        serverName = ConfigurationManager.AppSettings["ServerName"];
        databaseName = ConfigurationManager.AppSettings["Database"];
        documentstore = new DocumentStore { Url = serverName };
        documentstore.Initialize();
        Console.WriteLine("Initial Database...");
        _session = documentstore.OpenSession(databaseName);
        for (int i = 0; i < 1000000; i++)
        {
            var person = new Person()
            {
                Fname = "Meysam" + i,
                Lname = " Savameri" + i,
                Bdate = DateTime.Now,
                Salary = 6001 + i,
                Address = "BITS provides one foreground and three background priority levels that" +
                          "you can use to prioritize transfer jobs. Higher priority jobs preempt" +
                          "lower priority jobs. Jobs at the same priority level share transfer time," +
                          "which prevents a large job from blocking small jobs in the transfer" +
                          "queue. Lower priority jobs do not receive transfer time until all the " +
                          "higher priority jobs are complete or in an error state. Background" +
                          "transfers are optimal because BITS uses idle network bandwidth to" +
                          "transfer the files. BITS increases or decreases the rate at which files " +
                          "are transferred based on the amount of idle network bandwidth that is" +
                          "available. If a network application begins to consume more bandwidth," +
                          "BITS decreases its transfer rate to preserve the user's interactive" +
                          "experience. BITS supports multiple foreground jobs and one background" +
                          "transfer job at the same time.",
                Email = "Meysam" + i + "@hotmail.com",
            };
            _session.Store(person);
            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Count:" + i);
            Console.ForegroundColor = ConsoleColor.White;
        }
        Console.WriteLine("Commit...");
        _session.SaveChanges();
        documentstore.Dispose();
        _session.Dispose();
        Console.WriteLine("Complete...");
        Console.ReadLine();
    }
}
But the session doesn't save the changes; I get an error:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
A document session is intended to handle a small number of requests. Instead, try inserting in batches of 1024, then dispose of the session and create a new one. The reason you get an OutOfMemoryException is that the document session caches all constituent objects to provide a unit of work, which is why you should dispose of the session after inserting each batch.
A neat way to do this is with a Batch LINQ extension:
foreach (var batch in Enumerable.Range(1, 1000000)
                                .Select(i => new Person { /* set properties */ })
                                .Batch(1024))
{
    using (var session = documentstore.OpenSession())
    {
        foreach (var person in batch)
        {
            session.Store(person);
        }
        session.SaveChanges();
    }
}
The implementations of both Enumerable.Range and Batch are lazy and don't keep all the objects in memory.
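For reference, Batch is not one of the built-in LINQ operators; it ships with libraries such as MoreLINQ. If you would rather not take the dependency, a minimal lazy version can be sketched like this:

public static class EnumerableExtensions
{
    // Lazily yields the source in buckets of at most 'size' items, so only
    // one bucket is materialized in memory at a time.
    public static IEnumerable<List<T>> Batch<T>(this IEnumerable<T> source, int size)
    {
        var bucket = new List<T>(size);
        foreach (var item in source)
        {
            bucket.Add(item);
            if (bucket.Count == size)
            {
                yield return bucket;
                bucket = new List<T>(size);
            }
        }
        if (bucket.Count > 0)
            yield return bucket; // final partial bucket
    }
}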
RavenDB also has a bulk API that does a similar thing without the need for additional LINQ extensions:
using (var bulkInsert = store.BulkInsert())
{
    for (int i = 0; i < 1000 * 1000; i++)
    {
        bulkInsert.Store(new User
        {
            Name = "Users #" + i
        });
    }
}
Note that .SaveChanges() isn't called explicitly; it runs automatically either when a batch size is reached (configurable in BulkInsert() if needed) or when the bulkInsert is disposed.
I have a few tables in a C# application I'm currently working on, and for four of the five tables everything saves perfectly with no issues. For the fifth table everything seems fine until I restart the program (without modifying the code, and not working with a separate install, so the data shouldn't go away): the other four tables are fine, but the fifth has no records in it after the restart, even though it did while the program was last running. Below are some code excerpts. I have tried a few different solutions found online, including building a SQL string to run the commands against the database manually, and creating the row directly as opposed to the implementation below, which uses a generic DataRow.
//From main window
private void newInvoice_Click(object sender, EventArgs e)
{
    PosDatabaseDataSet.InvoicesRow newInvoice = posDatabaseDataSet1.Invoices.NewInvoicesRow();
    Invoices iForm = new Invoices(newInvoice, posDatabaseDataSet1, true);
}

//Invoices table save [works] (from Invoices.cs)
private void saveInvoice_Click(object sender, EventArgs e)
{
    iRecord.Date = Convert.ToDateTime(this.dateField.Text);
    iRecord.InvoiceNo = Convert.ToInt32(this.invoiceNumField.Text);
    iRecord.Subtotal = (float)Convert.ToDouble(this.subtotalField.Text);
    iRecord.Tax1 = (float)Convert.ToDouble(this.hstField.Text);
    iRecord.Total = (float)Convert.ToDouble(this.totalField.Text);
    iRecord.BillTo = this.billToField.Text;
    invoicesBindingSource.EndEdit();
    if (newRecord)
    {
        dSet.Invoices.Rows.Add(iRecord);
        invoicesTableAdapter.Adapter.Update(dSet.Invoices);
    }
    else
    {
        string connString = Properties.Settings.Default.PosDatabaseConnectionString;
        string queryString = "UPDATE dbo.Invoices set ";
        queryString += "Date='" + iRecord.Date + "'";
        queryString += ", Subtotal=" + iRecord.Subtotal;
        queryString += ", Tax1=" + iRecord.Tax1.ToString("N2");
        queryString += ", Total=" + iRecord.Total;
        queryString += " WHERE InvoiceNo=" + iRecord.InvoiceNo;
        using (SqlConnection dbConn = new SqlConnection(connString))
        {
            SqlCommand command = new SqlCommand(queryString, dbConn);
            dbConn.Open();
            SqlDataReader r = command.ExecuteReader();
            dbConn.Close();
        }
    }
    dSet.Invoices.AcceptChanges();
}

//Invoice items save [works until restart] (also from Invoices.cs)
private void addLine_Click(object sender, EventArgs e)
{
    DataRow iRow = dSet.Tables["InvoiceItems"].NewRow();
    iRow["Cost"] = (float)Convert.ToDouble(this.costField.Text);
    iRow["Description"] = this.descriptionField.Text;
    iRow["InvoiceNo"] = Convert.ToInt32(this.invoiceNumField.Text);
    iRow["JobId"] = Convert.ToInt32(this.jobIdField.Text);
    iRow["Qty"] = Convert.ToInt32(this.quantityField.Text);
    iRow["SalesPerson"] = Convert.ToInt32(this.salesPersonField.Text);
    iRow["SKU"] = Convert.ToInt32(this.skuField.Text);
    dSet.Tables["InvoiceItems"].Rows.Add(iRow);
    invoiceItemsTableAdapter.Adapter.Update(dSet, "InvoiceItems");
    PosDatabaseDataSet.InvoiceItemsDataTable dTable = (PosDatabaseDataSet.InvoiceItemsDataTable)dSet.InvoiceItems.Copy();
    DataRow[] d = dTable.Select("InvoiceNo=" + invNo.ToString());
    invoiceItemsView.DataSource = d;
}
Thanks in advance for any insight.
UPDATE: October 17, 2011. I am still unable to get this working. Are there any more ideas out there?
You must execute your SqlCommand in order to persist the changes you made:
using (SqlConnection dbConn = new SqlConnection(connString))
{
    dbConn.Open();
    SqlCommand command = new SqlCommand(queryString, dbConn);
    command.ExecuteNonQuery();
    dbConn.Close();
}
The ExecuteReader method is intended (as the name says) to read data from a SQL table. To run a statement that modifies data, you need a different method, as shown above.
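As an aside, building the UPDATE string by concatenation is fragile (date and decimal formatting) and open to SQL injection. Here is a sketch of the same update using parameters, based on the fields shown in your code (parameter names are illustrative):

const string queryString =
    "UPDATE dbo.Invoices SET [Date] = @Date, Subtotal = @Subtotal, " +
    "Tax1 = @Tax1, Total = @Total WHERE InvoiceNo = @InvoiceNo";

using (var dbConn = new SqlConnection(connString))
using (var command = new SqlCommand(queryString, dbConn))
{
    // Parameters avoid injection and let ADO.NET handle date/number formatting.
    command.Parameters.AddWithValue("@Date", iRecord.Date);
    command.Parameters.AddWithValue("@Subtotal", iRecord.Subtotal);
    command.Parameters.AddWithValue("@Tax1", iRecord.Tax1);
    command.Parameters.AddWithValue("@Total", iRecord.Total);
    command.Parameters.AddWithValue("@InvoiceNo", iRecord.InvoiceNo);
    dbConn.Open();
    command.ExecuteNonQuery();
}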
We need some more info first; you haven't shown the case where your code fails.
A common mistake in this kind of code is calling DataSet.AcceptChanges() before actually committing the changes to the database.
A second is a conflict between data bound through the binding source and edits made directly to the dataset.
Let's see the appropriate code and we can try to help.
Set a breakpoint after the call to invoiceItemsTableAdapter and check the InvoiceItems table for the row you have added. Release the breakpoint and then close your app. Check the database again. I would say that another table may be forcibly overwriting the invoice items table.