I'm having this issue:
I'm using VBS to extract all meetings from our conference rooms.
Sometimes I get two meetings from the same room that overlap each other for a certain amount of time, but Outlook shows only one of them.
I've tried checking all the item fields to see what the criteria might be for whether a meeting is shown in the shared calendar or not, but everything seems to be the same for both.
I would have uploaded my code here, but it is very long, about 350 lines of code.
So my question is: which property does Outlook use to decide whether a meeting is shown to other people in a shared calendar when it overlaps with another one?
I've found the answer.
Microsoft uses this model for recurring appointments:
When a recurring event is created, item.IsRecurring is set to True.
Then the occurrences and a recurrence pattern object are added to the item.
If you delete or modify one or more occurrences, another object, an Exception, is added for each of them; all deleted or modified occurrences can be found in the Exceptions collection.
The strange thing is that even if an occurrence is deleted, you can still find it as active, and thus overlapping with the appointments created afterwards.
The trick is to check all the way down, including the exceptions, in order to get the same view as in Outlook.
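For illustration, here is a rough sketch of that check (written with pywin32 rather than VBS, but the object model calls are the same; the room name is a placeholder):

    # Sketch: walk a room calendar and account for deleted/modified occurrences.
    import win32com.client

    ns = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
    room = ns.CreateRecipient("Room 101")          # placeholder room mailbox
    room.Resolve()
    calendar = ns.GetSharedDefaultFolder(room, 9)  # 9 = olFolderCalendar

    for item in calendar.Items:
        if item.Class != 26 or not item.IsRecurring:   # 26 = olAppointment
            continue
        pattern = item.GetRecurrencePattern()
        for exc in pattern.Exceptions:
            if exc.Deleted:
                # This slot still looks "active" if you only expand the pattern,
                # but Outlook no longer shows it, so skip it in your export.
                print("deleted occurrence:", item.Subject, exc.OriginalDate)
            else:
                moved = exc.AppointmentItem
                print("moved occurrence:", moved.Subject, moved.Start)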
If you need additional details, PM me.
Related
I'm trying to automatically forward meetings from my work email to my personal email whenever I flag a meeting as private. I don't really even know where to start. Is anyone able to help me out?
I've tried looking for solutions based on the category of the meeting, but that hasn't yielded any results either.
You can create a VBA macro that handles changes to items by using the Items.ItemChange or AppointmentItem.PropertyChange events. When an appointment item is marked as private, the following property is set under the hood:
Appointment.Sensitivity = olPrivate
So, you need to track changes made to the AppointmentItem.Sensitivity property, which returns or sets a constant in the OlSensitivity enumeration indicating the sensitivity of the Outlook item.
But I'd suggest starting with the Getting started with VBA in Office article to become more familiar with the VBA environment.
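As a rough sketch of that event/property logic (the answer describes a VBA macro; this is the same idea via pywin32, and the personal address is a placeholder):

    # Sketch: watch the default calendar and forward private appointments.
    import time
    import pythoncom
    import win32com.client

    OL_PRIVATE = 2          # olPrivate
    OL_FOLDER_CALENDAR = 9  # olFolderCalendar
    OL_APPOINTMENT = 26     # olAppointment

    class CalendarItemsEvents:
        def OnItemChange(self, item):
            # Fires whenever an item in the watched folder changes.
            if item.Class == OL_APPOINTMENT and item.Sensitivity == OL_PRIVATE:
                fwd = item.Forward()
                fwd.Recipients.Add("me@personal.example")  # placeholder address
                fwd.Recipients.ResolveAll()
                fwd.Send()

    outlook = win32com.client.Dispatch("Outlook.Application")
    calendar = outlook.GetNamespace("MAPI").GetDefaultFolder(OL_FOLDER_CALENDAR)
    items = win32com.client.DispatchWithEvents(calendar.Items, CalendarItemsEvents)

    # Keep pumping COM messages so the ItemChange events get delivered.
    while True:
        pythoncom.PumpWaitingMessages()
        time.sleep(0.2)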
I am trying to send a modifiable box with a no-mod script inside it to another user and cannot get the hoped for result. What I was hoping for is that the recipient could modify the box but not the script. Sadly that is not what happens.
What happens is that the user receives the box, and upon opening it, it shows the MOD box checked, and even says "You can modify this object". Good. And, indeed, the user can change the texture of the box, save it to inventory, re-rez it and the texture survives unchanged. Still good.
BUT, when the user changes the GROUP the box belongs to while the box is rezzed, they can do it and it seems to work. But as soon as they take the box back into inventory, three things happen:
(1) where it used to say, in inventory (no copy) it now says (no copy) (no modify) and
(2) when they rez it again, the GROUP reverts to what it was when I sent out the box, and
(3) curiously, the texture does not revert.
I looked for any further posts here regarding permissions.
I found and read with interest an Oct 14 thread: Modify permissions changed when sending items in a box in the Second Life Community Forum.
Of course I found the overall result of that problem to be basically: "Second Life does that sometimes".
I imagine this is relatively easy for someone else to replicate on a different sim, with different avatars and a different box, on a different day, if any of that changes things.
Advice? Links to the best place on-line to read up on permissions?
Links to other discussion forums where I might find assistance such as StackOverflow?
Thank you.
Afterthought #1 -- Yes, I did set modify BOTH with the box rezzed and with it in inventory.
The permissions shown in inventory are the most restrictive permissions on the object and its contents. The object is still modifiable but the inventory listing will show no modify if a script inside the object is no modify.
The group is set to the active group of the person rezzing it (or, in viewers with "rez under land group" enabled, the land group, should the person rezzing it be a member). Groups are not stored when the object is taken back into your inventory; it has nothing to do with the group you set it to.
A client of ours suddenly wants to add a custom column to his Outlook contact list. He wants this column to display the distribution list that said contact is a part of.
Now at a glance this is much more complicated than he makes it out to be. Not every contact is necessarily in AD, and they could certainly be part of more than one list. This all has to be accounted for in the formula.
I'm leaning towards telling him this is beyond our scope of support, but I thought I'd ask around first. Is there some pre-made code out there that performs a similar function? Thanks.
You can certainly retrieve that information programmatically (ExchangeUser.GetMemberOfList), but you cannot display that in the Contacts folder view.
Dim lst As Outlook.AddressEntry
' List every distribution list the current user is a member of
For Each lst In Application.Session.CurrentUser.AddressEntry.GetExchangeUser.GetMemberOfList
    Debug.Print lst.Name
Next
I'm building a system that needs to store/manage different types of events. For simplicity, I will focus on designing a calendar (I'm building something slightly different, but calendar is a good analogy and it's easy to reason about). I'd like to hear about possible database/schema design ideas.
Problem Description
I have a calendar with different types of events (for simplicity's sake, say there is only one type of event: Task). A user can add a new event for a particular date, edit it (change some details, like the title, or move it to another date) or delete it. There can be one-time events and recurring events (with different types of recurrence: every X days, every 15th day of the month, every week on Monday; kind of like simple cron). When a user moves a recurring event, all other instances of this event are moved in the same manner (e.g. +3 days). Important part: recurring events can have exceptions. So, for example, let's say I have a recurring event A which is repeated every 7 days, but I want to change its date for next week, so instead of Tuesday it will be assigned to Friday; after that it will still occur on Tuesday. This "exception" event shouldn't be affected when the "parent" event is moved.
Also, every recurring event can have additional info that is related only to one particular instance, e.g. I have the same recurring event A repeated every 7 days, I want to add a note for this week's instance that says "X", and I want to add another note for event A next month that says "Y" - those fields are only visible on those single instances.
Ideas
System with regular, one-time events is pretty straightforward so I won't discuss that and focus only on recurring events.
1. One possible solution is the one that resembles OOP: I can have an Event "class" with fields such as start_date, end_date (can be null), recurrence_type (something like an enum with possible values of EVERY_X_DAYS, DAY_OF_WEEK, DAY_OF_MONTH) and recurrence_value (say 7). When the user adds a new recurring event, I just create such an Event in the database. When the user wants to change one occurrence of this event, I add a new entry to the DB of the type/class MovedEvent that "inherits" from Event, with a different date and an additional field related_to that points to the ID (or UUID, if you will) of the Event it's related to. But at the same time, I need to keep track of all the MovedEvents (otherwise I'd have two events displayed in the same week), so I need to have an array moved_events of IDs that point to all MovedEvents.
Disadvantage: every time I want to display the calendar I need to get the Event and select all events from moved_events, which is not optimal if I have a lot of moved events.
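For illustration, Idea 1's two record types might look roughly like this (a sketch only; all the names are made up):

    # Sketch of the Event / MovedEvent records described in Idea 1.
    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum
    from typing import List, Optional
    from uuid import UUID

    class RecurrenceType(Enum):
        EVERY_X_DAYS = "every_x_days"
        DAY_OF_WEEK = "day_of_week"
        DAY_OF_MONTH = "day_of_month"

    @dataclass
    class Event:
        id: UUID
        title: str
        start_date: date
        end_date: Optional[date]                 # null = no end
        recurrence_type: Optional[RecurrenceType]
        recurrence_value: Optional[int]          # e.g. 7 for "every 7 days"
        moved_events: List[UUID] = field(default_factory=list)  # the extra array

    @dataclass
    class MovedEvent:
        id: UUID
        related_to: UUID              # points at the parent Event
        new_date: date                # where this single occurrence moved to
        note: Optional[str] = None    # per-instance extra info ("X", "Y")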
2. Another idea is to store every occurrence as a separate record. IMO it's a terrible idea, but I'm mentioning it because it's a possibility. Disadvantages: every time I want to edit the main event (e.g. I want to change the event from occurring "every 7 days" to "every 9 days") I need to change every single occurrence of the event. "Exceptions" (changing a single instance) are easier, though.
SQL/NoSQL? Scale details
I'm using PostgreSQL in my project, but I have basic knowledge of NoSQL databases, and if they are better suited for this kind of problem, I can use one.
Scale: Let's say I have 5k users, and each will have on average 150 events/week, 40% of which can be "exceptions". Therefore I want to design this system to be efficient.
Similar Questions & Other Resources
I've just started reading Martin Fowler's "Recurring Events for Calendars" (http://martinfowler.com/apsupp/recurring.pdf), but I'm not sure if it applies to my problem and, if so, how one would design a database schema according to this document (suggestions are welcome).
There are similar questions, but I didn't see any mention of "exceptions" (changing one event instance without affecting the others); maybe someone will find these links useful:
Design question: How would you design a recurring event system?
Optimal design for a Database with recurring event
Design option for 'recurring tasks'
Calendar Recurring/Repeating Events - Best Storage Method
What is the best way to represent "Recurring Events" in database?
Sorry for the long question, I wanted to describe the problem well. Yet I feel it's pretty chaotic, so if you have additional questions, I will happily provide more details. Again, I'd like to hear about possible database/schema design ideas plus any other suggestions. Thank you!
Use iCalendar RRules and ExDates
If it's a recurring event, just store the start/end datetimes and RRules and ExDates for the event.
Use a Materialized View to pre-calculate upcoming actual events, say for the next 30 days or 365 days.
As you are using Postgres, you can use existing Python, Perl, or JavaScript RRULE libraries (such as dateutil) inside a pg function to calculate future events based on the RRULEs and EXDATEs.
UPDATE: check out pg_rrule extension: https://github.com/petropavel13/pg_rrule
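For instance, a minimal sketch of the expansion step with dateutil (the column/field names are illustrative; the same logic could sit inside a PL/Python function or feed the materialized view):

    # Sketch: expand one stored event's RRULE, minus its EXDATEs, over a window.
    from datetime import datetime, timedelta
    from dateutil.rrule import rrulestr, rruleset

    def expand(dtstart, rrule_text, exdates, window_start, horizon_days=30):
        rules = rruleset()
        rules.rrule(rrulestr(rrule_text, dtstart=dtstart))
        for ex in exdates:                 # occurrences the user deleted or moved
            rules.exdate(ex)
        return rules.between(window_start,
                             window_start + timedelta(days=horizon_days), inc=True)

    # Weekly Tuesday event; the 2016-01-12 occurrence is excluded because the
    # user moved it to Friday (the moved instance would be stored as its own row).
    occurrences = expand(
        dtstart=datetime(2016, 1, 5, 10, 0),
        rrule_text="FREQ=WEEKLY;BYDAY=TU",
        exdates=[datetime(2016, 1, 12, 10, 0)],
        window_start=datetime(2016, 1, 1),
    )
    print(occurrences)   # [Jan 5, Jan 19, Jan 26] - Jan 12 is skipped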
For writing an offline client for the Google Reader service, I would like to know how best to sync with the service.
There doesn't seem to be official documentation yet and the best source I found so far is this: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
Now consider this: with the information from above I can download all unread items, I can specify how many items to download, and using the atom ID I can detect duplicate entries that I have already downloaded.
What's missing for me is a way to specify that I just want the updates since my last sync.
I can say: give me the 10 (parameter n=10) latest (parameter r=d) entries. If I specify the parameter r=o (date ascending) then I can also specify the parameter ot=[last time of sync], but only then, and the ascending order doesn't make any sense when I just want to read some items rather than all items.
Any idea how to solve that without downloading all items again and just rejecting duplicates? Not a very economic way of polling.
Someone proposed that I could specify that I only want the unread entries. But to make that solution work in such a way that Google Reader will not offer these entries again, I would need to mark them as read. In turn, that would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
Cheers,
Mariano
To get the latest entries, use the standard from-newest-date-descending download, which will start from the latest entries. You will receive a "continuation" token in the XML result, looking something like this:
<gr:continuation>CArhxxjRmNsC</gr:continuation>
Scan through the results, pulling out anything new to you. You should find that either all results are new, or everything up to a point is new, and all after that are already known to you.
In the latter case, you're done, but in the former you need to find the new stuff older than what you've already retrieved. Do this by using the continuation to get the results starting from just after the last result in the set you just retrieved by passing it in the GET request as the c parameter, e.g.:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?c=CArhxxjRmNsC
Continue this way until you have everything.
The n parameter, which is a count of the number of items to retrieve, works well with this, and you can change it as you go. If the frequency of checking is user-set, and thus could be very frequent or very rare, you can use an adaptive algorithm to reduce network traffic and your processing load. Initially request a small number of the latest entries, say five (add n=5 to the URL of your GET request). If all are new, in the next request, where you use the continuation, ask for a larger number, say, 20. If those are still all new, either the feed has a lot of updates or it's been a while, so continue on in groups of 100 or whatever.
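A rough sketch of that loop (authentication is omitted, and the gr: namespace URI is written from memory, so treat both as assumptions):

    # Sketch: pull new items newest-first, following the continuation token,
    # growing the batch size (n) adaptively, and stopping at known items.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "http://www.google.com/reader/atom/user/-/state/com.google/reading-list"
    ATOM = "{http://www.w3.org/2005/Atom}"
    GR = "{http://www.google.com/schemas/reader/atom/}"   # assumed namespace

    def fetch(n, continuation=None):
        url = f"{FEED}?r=d&n={n}"
        if continuation:
            url += f"&c={continuation}"
        with urllib.request.urlopen(url) as resp:          # auth omitted
            return ET.fromstring(resp.read())

    known_ids = set()          # atom ids already stored locally
    n, continuation = 5, None
    while True:
        feed = fetch(n, continuation)
        entries = feed.findall(f"{ATOM}entry")
        new = [e for e in entries if e.find(f"{ATOM}id").text not in known_ids]
        # ...store `new` locally and add their ids to known_ids here...
        if len(new) < len(entries):     # reached items we already had: done
            break
        token = feed.find(f"{GR}continuation")
        if token is None:               # no more pages
            break
        continuation, n = token.text, min(n * 4, 100)   # 5 -> 20 -> 80 -> 100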
However, and correct me if I'm wrong here, you also want to know, after you've downloaded an item, whether its state changes from "unread" to "read" due to the person reading it using the Google Reader interface.
One approach to this would be (see the sketch after these steps):
Update the status on google of any items that have been read locally.
Check and save the unread count for the feed. (You want to do this before the next step, so that you guarantee that new items have not arrived between your download of the newest items and the time you check the read count.)
Download the latest items.
Calculate your read count, and compare that to google's. If the feed has a higher read count than you calculated, you know that something's been read on google.
If something has been read on google, start downloading read items and comparing them with your database of unread items. You'll find some items that google says are read that your database claims are unread; update these. Continue doing so until you've found a number of these items equal to the difference between your read count and google's, or until the downloads get unreasonable.
If you didn't find all of the read items, c'est la vie; record the number remaining as an "unfound unread" total which you also need to include in your next calculation of the local number you think are unread.
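As a toy sketch of the counting logic in these steps (the two fetch_* callables stand in for the real Google Reader calls and are assumptions):

    # Sketch: reconcile locally-unread ids against google's unread count.
    def reconcile(local_unread, unfound_unread, fetch_unread_count, fetch_read_ids):
        remote_unread = fetch_unread_count()
        # items google considers read while we still think they are unread
        missing = (len(local_unread) + unfound_unread) - remote_unread
        found = 0
        for item_id in fetch_read_ids():        # read items, newest first, paged
            if found >= missing:
                break
            if item_id in local_unread:
                local_unread.discard(item_id)   # mark it read locally
                found += 1
        return max(missing - found, 0)          # new "unfound unread" total

    # Toy usage with in-memory data instead of real API calls
    local = {"a", "b", "c", "d"}
    leftover = reconcile(local, 0,
                         fetch_unread_count=lambda: 2,          # google: 2 unread
                         fetch_read_ids=lambda: iter(["d", "b"]))
    print(sorted(local), leftover)    # ['a', 'c'] 0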
If the user subscribes to a lot of different blogs, it's also likely he labels them extensively, so you can do this whole thing on a per-label basis rather than for the entire feed, which should help keep the amount of data down, since you won't need to do any transfers for labels where the user didn't read anything new on google reader.
This whole scheme can be applied to other statuses, such as starred or unstarred, as well.
Now, as you say, this
...would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
True enough. Neither keeping a local read/unread state (since you're keeping a database of all of the items anyway) nor marking items read in google (which the API supports) seems very difficult, so why doesn't this work for you?
There is one further hitch, however: the user may mark something read as unread on google. This throws a bit of a wrench into the system. My suggestion there, if you really want to try to take care of this, is to assume that the user in general will be touching only more recent stuff, and download the latest couple hundred or so items every time, checking the status on all of them. (This isn't all that bad; downloading 100 items took me anywhere from 0.3s for 300KB, to 2.5s for 2.5MB, albeit on a very fast broadband connection.)
Again, if the user has a large number of subscriptions, he's also probably got a reasonably large number of labels, so doing this on a per-label basis will speed things up. I'd suggest, actually, that not only do you check on a per-label basis, but you also spread out the checks, checking a single label each minute rather than everything once every twenty minutes. You can also do this "big check" for status changes on older items less often than you do a "new stuff" check, perhaps once every few hours, if you want to keep bandwidth down.
This is a bit of a bandwidth hog, mainly because you need to download the full article from Google merely to check the status. Unfortunately, I can't see any way around that in the API docs that we have available to us. My only real advice is to minimize the checking of status on non-new items.
The Google Reader API hasn't been officially released yet, so this answer may change when it is.
Currently, you would have to call the API and disregard items already downloaded, which, as you said, isn't terribly efficient, as you will be re-downloading items every time even if you already have them.