My task is to retrieve a set of messages from GMail via POP3 (no IMAP).
I can do RETR MSG #, and deleting messages is prohibited.
Fetchmail and procmail keep downloading the same set of new, unread messages (that part belongs on Server Fault). Is there a header specifically designed to distinguish previously read messages? Or should I compute a checksum of the message body/subject/date?
The POP3 protocol doesn't support a read/seen kind of flag. Some servers support a non-standard header such as X-Seen that acts like a read flag; you'd have to use the TOP command to fetch a message's headers and check whether it's set (or, more to the point, whether it's even there).
In POP3, tracking read state is supposed to be up to the client. The good news is you don't need a checksum: just use UIDL, which returns a list of unchanging, unique IDs for the messages in the inbox, or, when called with a message number, the unique ID for the message at that position in the mailbox (you can't rely on message position, since other clients may be accessing the mailbox and deleting messages).
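For example, a rough sketch of that approach using the JavaMail POP3 provider; the credentials and the file-based store of already-seen UIDs are just placeholders:

import com.sun.mail.pop3.POP3Folder;

import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class UidlTracker {
    public static void main(String[] args) throws Exception {
        // UIDs we have already processed, persisted locally (one per line).
        Path seenFile = Paths.get("seen-uids.txt");
        Set<String> seen = Files.exists(seenFile)
                ? new HashSet<>(Files.readAllLines(seenFile))
                : new HashSet<>();

        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("pop3s");
        store.connect("pop.gmail.com", "myaccount@gmail.com", "app-password"); // placeholders

        POP3Folder inbox = (POP3Folder) store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);

        for (Message msg : inbox.getMessages()) {
            String uid = inbox.getUID(msg);   // the UIDL value for this message
            if (seen.contains(uid)) {
                continue;                     // handled in a previous run
            }
            // ... process the message ...
            seen.add(uid);
        }

        Files.write(seenFile, seen);          // remember what we have seen
        inbox.close(false);
        store.close();
    }
}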
Try keeping track of messages by their Message-ID header:
Message message;
// ... obtain the javax.mail.Message, e.g. from the POP3 folder ...
String messageId = message.getHeader("Message-ID")[0];
The telegram documentation states:
Receipt of virtually all messages (with the exception of some purely
service ones as well as the plain-text messages used in the protocol
for creating an authorization key) must be acknowledged. This requires
the use of the following service message (not requiring an
acknowledgment):
msgs_ack#62d6b459 msg_ids:Vector<long> = MsgsAck;
This thread alludes to sending acks back to the server but not the mechanism by which those acks are sent. I attempted sending a MsgsAck and a msgs_ack to the server but they failed because those are data types, not constructors (methods). This leads me to two questions:
How does a telegram client send acks back to the server? (both individually and as part of a method call)
How does a telegram client differentiate between server responses that require an ack and those that don't? (It appears that responses including a req_msg_id require an ack, but I'd like confirmation.)
The simple way to go about this is:
1) Accumulate the msg_ids that you receive from the server that need to be acknowledged, as indicated in the documentation: these are all content-related messages, not service messages.
2) Every time you send new messages to the server, include the accumulated acknowledgments in a message container along with the messages you intend to send.
3) If msg_ids have been waiting to be acknowledged for some period, say X minutes, without an opportunity to clear them via step 2), simply send an acknowledgment message back to Telegram with the list of msg_ids to be acknowledged.
To send an acknowledgement use this:
msgs_ack#62d6b459 msg_ids:Vector<long> = MsgsAck;
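A framework-agnostic sketch of that accumulate-and-flush logic; the TelegramTransport interface and its methods are hypothetical placeholders for whatever MTProto layer you use to serialize msgs_ack and message containers:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AckAccumulator {
    /** Hypothetical hook into your MTProto layer. */
    public interface TelegramTransport {
        void sendMsgsAck(List<Long> msgIds);                       // serializes msgs_ack#62d6b459
        void sendWithContainer(Object payload, List<Long> acks);   // piggy-backs acks in a container
    }

    private final List<Long> pending = new ArrayList<>();
    private final TelegramTransport transport;

    public AckAccumulator(TelegramTransport transport, long flushMinutes) {
        this.transport = transport;
        // Step 3: flush any acks that have been waiting too long.
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(this::flush, flushMinutes, flushMinutes, TimeUnit.MINUTES);
    }

    /** Step 1: call this for every content-related message received from the server. */
    public synchronized void record(long msgId) {
        pending.add(msgId);
    }

    /** Step 2: piggy-back pending acks onto an outgoing message container. */
    public synchronized void send(Object payload) {
        transport.sendWithContainer(payload, new ArrayList<>(pending));
        pending.clear();
    }

    private synchronized void flush() {
        if (!pending.isEmpty()) {
            transport.sendMsgsAck(new ArrayList<>(pending));
            pending.clear();
        }
    }
}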
Hi guys,
I'm looking for a method to count the number of matching messages in ActiveMQ. Here is the code for adding a message:
ObjectMessage myBeanMessage = session.createObjectMessage();
myBeanMessage.setObject(myBean);
myBeanMessage.setStringProperty("myProperty", myBean.getProperty());
producer.send(myBeanMessage);
Now I want to count the number of messages in myQueue whose myProperty equals, for example, "a property String", but I cannot find any suitable API in QueueViewMBean:
http://activemq.apache.org/maven/5.7.0/activemq-core/apidocs/org/apache/activemq/broker/jmx/QueueViewMBean.html
The only API that might satisfy my requirement is
int copyMatchingMessagesTo(String selector, String destinationName) throws Exception
Copies the messages matching the given selector
Returns: the number of messages copied
Throws: Exception
which means that I would have to copy all matching messages to another queue and count the number of messages copied.
However, copying all matching messages just to find out how many there are feels like an unnecessary waste of resources.
So, is there any way that I can count the number of matching messages directly?
Thanks
Well, you can always browse the queue using that selector and count the messages browsed. That means the messages are read into a client, but not actually copied to another queue. Whether that is more performant/resource-friendly in your scenario, or worse, I cannot tell.
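A quick browse-and-count sketch in plain JMS against ActiveMQ (the broker URL and queue name are placeholders):

import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.Connection;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import java.util.Enumeration;

public class MatchingMessageCounter {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue queue = session.createQueue("myQueue");
        // Browse only messages matching the selector; nothing is consumed or copied.
        QueueBrowser browser = session.createBrowser(queue, "myProperty = 'a property String'");

        int count = 0;
        Enumeration<?> messages = browser.getEnumeration();
        while (messages.hasMoreElements()) {
            messages.nextElement();
            count++;
        }
        System.out.println("Matching messages: " + count);

        browser.close();
        connection.close();
    }
}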
The use case treads into the realm of ActiveMQ (or JMS in general) anti-patterns.
You need to remember that a queue is not a database, and treating it as such will most likely end in tears. In JMS you have the option of using a QueueBrowser with a selector to browse the queue contents. Tread carefully there, however: the spec only states that a browser gives you a snapshot of the queue, with no guarantee that you will browse all messages. In ActiveMQ, when you browse a queue you can only browse what fits into memory, and then the browse stops, so your count could be way off for really deep queues.
I have some Camel routes with Mina sockets and Jetty websockets. I am able to broadcast a message to all the clients connected to the websocket, but how do I send a message to a specific endpoint? How do I maintain a list of all connected clients, with a client ID as reference, so I can route to a specific client? Is that possible? Will I be able to specify a dynamic client in the to URI?
Or maybe I am thinking about this wrong and I need to create topics on ActiveMQ and have the clients subscribe to them. Would that mean creating a topic for every websocket client and routing each message to the right topic?
Am I at least on the right track here? Any examples you can point out? Google was not helpful.
The approach you take depends on how sensitive the client information is. The downside of a single topic with selectors is that anyone can subscribe to the topic without a selector and see all the information for everyone - not usually something that you want to do.
A better scheme is to use a message distribution mechanism (a set of Camel routes) that acts as an intermediary between the websocket clients and the system producing the messages. This mechanism is responsible for distributing messages from a single destination to client-specific destinations. I have worked on a couple of banking web front-ends that used a similar scheme.
In order for this to work you first generate for each user a distinct token/UUID; this is presented to the user when the session is established (usually through some sort of profile query/message).
It's essential that the UUID can be worked out as a hash of the clientId rather than being stored in a DB, as it will be used all the time and you want to make sure this is worked out quickly.
The user then uses that information to connect to specific topics that use that UUID as a suffix. For example two users subscribing to an orderConfirmation topic would each subscribe to their own version of that topic:
clientA -> orderConfirmation.71jqsd87162iuhw78162wd7168
clientB -> orderConfirmation.76232hdwe7r23j92irjh291e0d
To keep track of "presence", your clients would need to periodically send a heartbeat message containing their clientId to a well-known topic that your distribution mechanism listens on. Clients should not be able to subscribe to this topic for reads (see ActiveMQ Security). The message distribution mechanism needs to keep in memory a data structure that contains the clientId and the time a heartbeat was last seen.
When a message is received by the distribution mechanism, it checks whether the clientID for which it received the message has a "live/present" session, determines the UUID for the client, and broadcasts the message on the appropriate topic.
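A rough Camel sketch of such a distribution mechanism; the route URIs, header names, hashing scheme, and 60-second liveness window are illustrative assumptions, not a fixed design:

import org.apache.camel.builder.RouteBuilder;
import org.apache.commons.codec.digest.DigestUtils; // or any hash of your choosing
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DistributionRoutes extends RouteBuilder {
    // clientId -> last heartbeat timestamp
    private final Map<String, Long> presence = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        // Clients publish heartbeats containing their clientId to a well-known topic.
        from("activemq:topic:presence.heartbeat")
            .process(e -> presence.put(e.getIn().getHeader("clientId", String.class),
                                       System.currentTimeMillis()));

        // Messages from the back end carry a clientId header; fan out to the
        // client-specific topic only if the client has a live session.
        from("activemq:queue:orderConfirmation.outbound")
            .filter(e -> {
                Long lastSeen = presence.get(e.getIn().getHeader("clientId", String.class));
                return lastSeen != null && System.currentTimeMillis() - lastSeen < 60_000;
            })
            .process(e -> {
                String clientId = e.getIn().getHeader("clientId", String.class);
                // UUID suffix derived as a hash of the clientId, as described above.
                e.getIn().setHeader("topicSuffix", DigestUtils.sha256Hex(clientId));
            })
            .toD("activemq:topic:orderConfirmation.${header.topicSuffix}");
    }
}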
Over time this will create a large number of topics on your broker that you don't want hanging around when the user has gone away. You can configure ActiveMQ to delete these if they have been inactive for some time.
You definitely do not want to create a separate endpoint for each client.
A topic and a subscription with a selector is an elegant way to solve this. I would say it is the best option.
You need a single topic, which every client subscribes to with a selector like clientId IN ('${myClientId}', 'EVERYONE'). When you want to publish a message to a specific client, you set the clientId property to that client's ID. If you want to broadcast, you set it to 'EVERYONE'.
I hope I understand the problem right...
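For completeness, a minimal JMS sketch of that selector-based approach (topic and property names are just examples):

import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class SelectorExample {
    // Subscriber side: only messages addressed to me (or to everyone) are delivered.
    static MessageConsumer subscribe(Session session, String myClientId) throws Exception {
        Topic topic = session.createTopic("notifications");
        String selector = "clientId IN ('" + myClientId + "', 'EVERYONE')";
        return session.createConsumer(topic, selector);
    }

    // Publisher side: address a single client, or broadcast with 'EVERYONE'.
    static void publish(Session session, String targetClientId, String body) throws Exception {
        Topic topic = session.createTopic("notifications");
        MessageProducer producer = session.createProducer(topic);
        TextMessage msg = session.createTextMessage(body);
        msg.setStringProperty("clientId", targetClientId); // or "EVERYONE"
        producer.send(msg);
        producer.close();
    }
}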
I am using Peter Huber's POP3 client to connect to gmail and download messages.
The inboxes being accessed are transactional inboxes used only for code access. That is, a message comes in with an order file attached, the code processes it and then deletes the message. One stipulation of the code, though, was a DEBUG flag which, if set, would prevent the code from deleting the message so that you could run the program again later without the debug flag and reprocess the message. So, in my code I have
If Not Arguments.Debug Then pop.DeleteEmail(eid)
This works fine. The problem is that even when not deleting the message, running the program a second time will not re-retrieve it, even though if I log in to gmail and look at the inbox, it is still there. The only way I can get the program to see the message again is to forward it back to the same inbox. But in Peter's code I do not see anywhere that he is keeping track of seen messages between sessions.
Is this something that is done on gmail's end? Refusing to deliver a message to the same client a second time? If so, is there any way I can change my gmail account so that it will always show all messages in the inbox to a client when retrieving the list of messages, even ones already "seen"? I don't see anything in the gmail settings screen.
UPDATE: I tried adding a method to send a RSET command to the server, as per this comment on the codeproject page. I then call my new Reset() method after retrieving my messages but before disconnecting, ... but I still have the same problem.
Okay... I found a "sort of" answer after reading through pages of comments on the codeproject project.
According to this comment, the RSET command does not actually do anything when you are dealing with gmail's servers.
The "answer" is to prepend your username with the string "recent:", so instead of logging in as myaccount@gmail.com you log in as recent:myaccount@gmail.com. Rather hackish... but it works.
In our scenario I'm thinking of using the pub sub technique. However I don't know which is the better option.
Option 1
When it is called externally, a web service of ours will publish a message saying that something has happened: ExternalPersonCreatedMessage.
This message will contain a field that represents the destinations to process the message into (multiple allowed).
Various subscribers will subscribe. These subscribers will filter the message to see if any action is required by checking the destination field.
Option 2
A web service of ours will parse the incoming call and publish specific types of messages depending on the destinations supplied in the field. i.e. many Destination[n]PersonCreatedMessage messages would be created.
Subscribers will subscribe only to the specific messages they care about, i.e. without having to filter any messages.
QUESTIONS
Which of the above is the better option, and why? And how do I stop myself from making request messages? From what I've read/seen, I should be structuring this around SOMETHING HAS HAPPENED (PersonCreated, PersonDeleted) and NOT around REQUESTING SOMETHING TO HAPPEN (CreatePerson or DeletePerson).
Are my thoughts correct? I've been looking for guidance on how to structure messages and make sure I don't go down the wrong path, but have found no guidance on dos and don'ts. Can anyone help and guide me? I want to try and get this right from the off :)
Based on the integration scenario in the referenced article, it appears to me that you may need a Saga to complete the workflow of accept message -> operate on message -> send confirmation. If the confirmation is sent immediately after the operation, you could use NServiceBus's message handler pipeline feature, which allows you to chain handlers in a specified sequence such as...
First<FilterHandler>.Then<DoWorkHandler>().AndThen<SendConfirmationHandler>();
In terms of content filtering, you can do this, although you incur some transport overhead, meaning the queue will have to accept the message and the process will always call the first handler on every message (you can short-circuit the above pipeline at any point). It may be the case that what you really want is a Distributor/Worker setup where all Workers are the same and you can handle some load.
If you truly have different endpoints with completely different logic, then I would have the publisher process (which only accepts and publishes messages) do the work of translating the inbound message into something else that a subscriber can be interested in. If you then find that a given published message only ever has one subscriber, you don't need to publish at all; you just need to Bus.Send() to the correct endpoint.
The way NServiceBus handles pub-sub is more like your option two.
A publisher service has an input queue and a subscription store.
A subscriber service has an input queue.
On start-up, the subscriber sends a subscription message to the input queue of the publisher.
The subscription message contains the type of message the subscriber is interested in and the subscriber's queue address.
The publisher records the subscription in the subscription store.
The publisher receives a message.
The publisher evaluates the message type against the list of subscriptions.
For each match found, the publisher sends the message to the subscriber's queue address.
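A framework-agnostic sketch of those mechanics, purely to illustrate the flow (NServiceBus does all of this for you; the Transport interface here is invented for illustration):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class Publisher {
    /** Hypothetical transport abstraction: sends a message to a named queue. */
    public interface Transport {
        void send(String queueAddress, Object message);
    }

    // The "subscription store": message type -> subscriber queue addresses.
    private final Map<Class<?>, Set<String>> subscriptions = new ConcurrentHashMap<>();
    private final Transport transport;

    public Publisher(Transport transport) {
        this.transport = transport;
    }

    /** Handles a subscription message arriving on the publisher's input queue. */
    public void onSubscribe(Class<?> messageType, String subscriberQueue) {
        subscriptions.computeIfAbsent(messageType, t -> new CopyOnWriteArraySet<>())
                     .add(subscriberQueue);
    }

    /** Publishing: evaluate the message type against the stored subscriptions. */
    public void publish(Object message) {
        for (Map.Entry<Class<?>, Set<String>> entry : subscriptions.entrySet()) {
            if (entry.getKey().isInstance(message)) {
                for (String queue : entry.getValue()) {
                    transport.send(queue, message);   // one copy per subscriber queue
                }
            }
        }
    }
}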
In my opinion, you should stop thinking about destinations. Messages are messages. They should not have any inherent destination information in them. The subscription mechanism defines the addressing/routing requirements for the solution.