When I manually open an .ics file that I have generated (by double-clicking it on the desktop), everything seems to work, with the exception of the reminder/alarm. For some reason, the event always opens in the default state, with the reminder set to None.
Could you let me know whether this is something to do with my code, or something that Outlook (and possibly other software) does?
BEGIN:VCALENDAR
PRODID:-//Company name//Product name//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:-0000
DTSTART:16010101T020000
RRULE:FREQ=YEARLY;WKST=MO;INTERVAL=1;BYMONTH=10;BYDAY=-1SU
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0000
TZOFFSETTO:+0100
DTSTART:16010101T010000
RRULE:FREQ=YEARLY;WKST=MO;INTERVAL=1;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20161008T144102Z
UID:4989C88E4BD54DFF82864D58CBFF12A6AD68ACD9BF3344AA84FEC7683C4DA
DTSTAMP:20160925T093000Z
DTSTART;TZID=Europe/London:20160925T093000
DTEND;TZID=Europe/London:20160925T210000
SUMMARY:Here is a summary.
DESCRIPTION:Here is a description.
TRANSP:OPAQUE
BEGIN:VALARM
TRIGGER:-PT1440M
ACTION:DISPLAY
DESCRIPTION:Reminder
END:VALARM
END:VEVENT
END:VCALENDAR
Don't know when you last tried the above event, but the DTSTART/DTEND are both in the past. As a consequence, I suspect that the client is just ignoring an alarm that it can no longer trigger.
You might also want to express the TRIGGER in terms of hours (-PT24H).
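For example, with the event moved to a future date and everything else left as-is, the alarm block would simply read:
BEGIN:VALARM
TRIGGER:-PT24H
ACTION:DISPLAY
DESCRIPTION:Reminder
END:VALARM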
Suppose I get a mail from "Lastname, Firstname" with the content "Hi, we wait for your answer" and I want to reply to it.
Is there a way that, as soon as I hit the Reply button, instead of
From: "Lastname, Firstname" firstname.lastname#mail.com
Sent: Monday, 14. November 2017 12:23
To: sample@mailc.com
Subject: Draft
Hi, we wait for you
I would instead get a draft that I could edit, like this:
Hi firstname,
thank you for your message!
Kind regards
From: me@mail.com
Sent: Monday, 14. November 2017 12:23
To: sample@mailc.com
Subject: Draft
Hi, we wait for you
From Outlook 2010 onwards you can create a boilerplate template with custom text, incorporating many of your desired features, using Quick Steps.
I've created an experiment in PsychoPy Builder in which participants must vocally name pictures presented onscreen (for example, if a picture of a chair appears, the participant has to respond by saying "chair"). I've set up a code component to detect each vocal response, which ends the trial and initiates the next one. This part of the experiment works well; however, I'm having trouble integrating EEG recording.
Some important information:
My trial loop reads images and triggerVal values from a .csv file. I have an image component (called english_naming) that displays images for participants to name out loud. The component's STOP field is defined as $vpvk.event_onset, which forces the trial to end and the next one to begin upon detection of a vocal response.
So, here is my (working) code component at present:
Begin Experiment:
from psychopy import parallel
from psychopy import voicekey as vk  # the vk module used below
port = parallel.ParallelPort(address=61432)
Begin Routine:
vpvk = vk.OnsetVoiceKey(sec=10)  # creates the voice key
vpvk.start()  # starts recording
port.setData(triggerVal)  # sends the trigger value that the loop read from the .csv
End Routine:
vpvk.stop()  # ends the recording
port.setData(0)  # resets the trigger value to 0 for the start of the next trial
My problem is this:
At present, the parallel port events are time-locked to the start of each trial, but I need them to be time-locked to participants' vocal responses. I tried inserting if vpvk.event_onset(): above port.setData(triggerVal), but this fails to generate any trigger codes at all. I've also tried if english_naming == FINISHED, but the same problem occurred. I've tried a bunch of variants on these two lines of code, but nothing I can think of seems to work.
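To make the intent concrete, what I'm effectively after is something like the sketch below in the code component's Each Frame tab (sent_trigger is just a flag name I'm inventing for illustration, and I'm assuming event_onset stays falsy until a vocal onset is detected):
Begin Routine:
sent_trigger = False  # so the trigger is only sent once per trial
Each Frame:
if vpvk.event_onset and not sent_trigger:  # a vocal onset has been detected
    port.setData(triggerVal)  # time-lock the trigger to the vocal response
    sent_trigger = True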
I would really really appreciate any advice on this problem. Thanks in advance!
Two months ago I added custom tracking events with whitespace in their names to my app, such as:
[PFAnalytics trackEvent:@"Share Button Click" dimensions:dimensions];
However, it seems that Parse no longer allows whitespace in event names, which in turn has caused my code to stop registering analytics. Does anyone know a workaround?
Here's the error:
Error: invalid event name: cannot contain whitespace (Code: 160, Version: 1.4.2)
2014-11-01 runEventually command failed. Error:Error Domain=Parse Code=160 "The operation couldn’t be completed. (Parse error 160.)" {code=160, error=invalid event name: cannot contain whitespace}
I'm curious why Parse made this change and whether there's anything I can do so that I'm not forced to create a new event.
I'm having a problem with Sitecore/Lucene on our Content Management environment; we have two Content Delivery environments where this isn't a problem. I'm using the Advanced Database Crawler to index a number of items of defined templates. The index is pointing to the master database.
The index will remain 'stable' for a few hours or so, and then I will start to see the error below in the logs, as well as whenever I try to open a Searcher.
ManagedPoolThread #17 16:18:47 ERROR Could not update index entry. Action: 'Saved', Item: '{9D5C2EAC-AAA0-43E1-9F8D-885B16451D1A}'
Exception: System.IO.FileNotFoundException
Message: Could not find file 'C:\website\www\data\indexes\__customSearch\_f7.cfs'.
Source: Lucene.Net
at Lucene.Net.Index.SegmentInfos.FindSegmentsFile.Run()
at Sitecore.Search.Index.CreateReader()
at Sitecore.Search.Index.CreateSearcher(Boolean close)
at Sitecore.Search.IndexSearchContext.Initialize(ILuceneIndex index, Boolean close)
at Sitecore.Search.IndexDeleteContext..ctor(ILuceneIndex index)
at Sitecore.Search.Crawlers.DatabaseCrawler.DeleteItem(Item item)
at Sitecore.Search.Crawlers.DatabaseCrawler.UpdateItem(Item item)
at System.EventHandler.Invoke(Object sender, EventArgs e)
at Sitecore.Data.Managers.IndexingProvider.UpdateItem(HistoryEntry entry, Database database)
at Sitecore.Data.Managers.IndexingProvider.UpdateIndex(HistoryEntry entry, Database database)
From what I have read, this can happen when the index is updated while there is an open reader: when a merge operation happens, the reader still holds a reference to the deleted segment, or something to that effect (I'm not an expert on Lucene).
I have tried a few things with no success, including subclassing the Sitecore.Search.Index object and overriding CreateWriter(bool recreate) to change the merge scheduler/policy and tweak the merge factor. See below.
protected override IndexWriter CreateWriter(bool recreate)
{
    IndexWriter writer = base.CreateWriter(recreate);
    LogByteSizeMergePolicy policy = new LogByteSizeMergePolicy();
    policy.SetMergeFactor(20);
    policy.SetMaxMergeMB(10);
    writer.SetMergePolicy(policy);
    writer.SetMergeScheduler(new SerialMergeScheduler());
    return writer;
}
When I'm reading the index I call SearchManager.GetIndex(Index).CreateSearchContext().Searcher, and when I'm done getting the documents I need I call .Close(), which I thought would have been sufficient.
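Roughly, the read path looks like the sketch below (the index name and the query are placeholders for illustration, not my real ones):
IndexSearchContext context = SearchManager.GetIndex("customSearch").CreateSearchContext();
IndexSearcher searcher = context.Searcher;
Hits hits = searcher.Search(new TermQuery(new Term("_name", "home")));  // placeholder query
// ... read the documents I need from hits ...
searcher.Close();  // I assumed closing the searcher here would release the underlying reader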
I was thinking I could perhaps try overriding CreateSearcher(bool close) as well, to ensure I'm opening a new reader each time, which I will give a go after this. I don't really know enough about how Sitecore handles Lucene and its readers/writers.
I also tried playing around with the UpdateInterval value in the web.config to see if that would help; alas, it didn't.
I would greatly appreciate hearing from anyone who a) knows of any situations in which this could occur, or b) has any potential advice/solutions, as I'm starting to bang my head against a rather large wall :)
We're running Sitecore 6.5 rev111123 with Lucene 2.3.
Thanks,
James.
It seems like Lucene freaks out when you try to re-index something that is in the process of being indexed already. To verify that, try the following:
Set the UpdateInterval of your index to a really high value (e.g. 8 hours); see the snippet at the end of this answer.
Then stop the w3wp.exe process and delete the index.
After deleting it, rebuild the index in Sitecore and wait for the rebuild to finish.
Test again and see if the error still occurs.
If the error no longer occurs, the cause is the UpdateInterval being set too low, which lets your index (probably still being constructed) be overwritten by a new one (which won't be finished either), leaving your segments.gen file with the wrong index information.
This .gen file tells your IndexReader which segments are part of your index, and it is recreated when the index is rebuilt.
That's why I suggest disabling the updates for a long period and rebuilding the index manually.
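For reference, the update interval I mean is the Indexing.UpdateInterval setting in web.config (assuming the standard Sitecore 6.x setting name); eight hours would look like this:
<setting name="Indexing.UpdateInterval" value="08:00:00" />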
I've got some code to parse an XML file like this:
[doc := (XML.XMLParser new) parse: aFilename asURI]
    on: XML.SAXParseException
    do: [:ex | MyCustomError raiseSignal: ex description].
I now want to handle MyCustomError higher up the stack by moving the XML file to a folder named 'Failed', but I get a sharing-violation error because the parser has not had the opportunity to close the file.
If I alter my code like this it works, but I wonder if there is a better way:
[doc := (XML.XMLParser new) parse: aFilename asURI]
    on: XML.SAXParseException
    do: [:ex | description := ex description].
description ifNotNil: [MyCustomError raiseSignal: description].
Code can signal an exception for errors which are resumable (non-fatal); if you trap such an error you can't be certain that the XMLParser isn't intending to keep on going. For example, code that doesn't know whether it's being called in interactive or batch mode might signal an exception for a simple informational message; the caller would know whether to handle it in an interactive way (say with a message prompt) or a batch way (writing a message to a log file).
In order for this to work the pieces of code that are communicating in this way have to know what sort of an error it is they're dealing with. (This would typically be done with a severity level, encoded either by state in the exception object or by raising a different class of exception.) If you inspect the ex object you might be able to see this information.
In any case, the evidence suggests that XMLParser is treating SAXParseException as a resumable error (otherwise, it should clean up after itself). That being so, your "fix" seems appropriate enough.
You can also run the parser on a ReadStream instead of a URI. Then you can wrap your code in an ensure: block where you close the ReadStream.
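A rough sketch of what I mean, assuming your parser will accept a stream in place of the URI (check the exact selector your XML framework expects):
| stream |
stream := aFilename asFilename readStream.
[[doc := (XML.XMLParser new) parse: stream]
    on: XML.SAXParseException
    do: [:ex | MyCustomError raiseSignal: ex description]]
        ensure: [stream close].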