I need to run these sampler requests in parallel, using the same CSV file for all of them.
The CSV has LN and Userid columns with three rows of values.
I want to run all three samplers at the same time for each value, so the total would be 9 requests (3 values in the CSV).
Submit1 | Submit2 | Submit3 should all run in parallel for the same Userids from the CSV:
(for Userid1) | (for Userid2) | (for Userid3)
Submit1       | Submit1       | Submit1
Submit2       | Submit2       | Submit2
Submit3       | Submit3       | Submit3
I almost managed this by sharing the CSV Data Set across all threads and setting Recycle on EOF = True and Stop thread on EOF = False.
But when I set the number of threads to 3, it fires 18 requests instead of the 9 I was expecting.
In RFC 8285, which deals with RTP Header Extensions, the structure for a 1-byte header extension is as shown below (Section 4.2):
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0xBE | 0xDE | length=3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ID | L=0 | data | ID | L=1 | data...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
...data | 0 (pad) | 0 (pad) | ID | L=3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
I understand the 0xBEDE part, which is explained in the RFC. Then comes the "length=3" field, which is followed by the actual extension elements. Each element consists of an ID followed by a length. A similar structure is defined for two-byte header extensions.
In both types of headers, I do not understand the "length=3" field. Is it just padding to reach a 32-bit boundary? If so, what purpose does it serve? Ease of parsing? Why not have the extension elements start immediately after the 0xBEDE? That certainly would have been more space efficient.
Maybe I am missing something basic.
This probably dates back to RFC 3550. Specifying the length field explicitly like this allows clients that do not understand extensions to skip them more easily.
Also note that until RFC 3550 was extended by RFC 5285 (later updated by RFC 8285), there could only be a single extension, so what you see is a backward-compatibility hack.
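To make the skipping argument concrete, here is a minimal sketch in plain Ruby (the function name and offset handling are mine, not from the RFC) of how a receiver that understands nothing about extension elements can still jump over the whole block using only the length word:

    def skip_header_extension(packet, offset)
      # The extension header is two 16-bit big-endian fields: the profile
      # identifier (0xBEDE for one-byte extensions) and the length, which
      # counts 32-bit words of extension data, excluding this 4-byte header.
      _profile_id, length_words = packet[offset, 4].unpack("nn")
      # Offset of the first byte after the entire extension block; none of
      # the ID/L elements inside had to be parsed.
      offset + 4 + length_words * 4
    end

This is also why padding to a 32-bit boundary matters: skipping becomes a single pointer bump rather than an element-by-element walk.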
Problem:
On ActiveMQ, for some reason (I don't know why), the ActiveMQ.Advisory.TempQueue is getting bigger and bigger (1 GB per day).
Here is a snapshot:
Name: ActiveMQ.Advisory.TempQueue
Producer #: 0
Consumer #: 816
Enqueue #: 187550135
Dequeue #: 0
Memory %: 0
Dispatch #: 187836323
Always retroactive: FALSE
Average blocked time: 0
Average enqueue time: 0.3694736
Average message size: 1024
Blocked producer warning interval: 30000
Blocked sends: 0
Dlq: FALSE
Expired count: 0
Forward count: 0
In flight count: 187836323
Max audit depth: 2048
Max enqueue time: 1233
Max message size: 1024
Max page size: 200
Max producers to audit: 1024
Memory limit: 668309914
Memory usage byte count: 0
Memory usage portion: 1
Min enqueue time: 0
Min message size: 1024
Options:
Prioritized messages: FALSE
Producer flow control: TRUE
Queue size: 0
Slow consumer strategy:
Store message size: 0
Total blocked time: 0
Use cache: TRUE
Object name: org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Topic,destinationName=ActiveMQ.Advisory.TempQueue
Any idea?
Advisory Topics in ActiveMQ don't accumulate data. They are Topics, and as such, when there are no consumers on them, the messages sent to them are dropped. If you have a consumer on an advisory Topic, then messages pass through it but are not stored in the broker's persistent storage. The stats can sometimes be deceiving, given that the enqueue count keeps ticking up.
Without knowing more about what you are seeing, there's not much more help that can be offered.
If you are seeing growth in the KahaDB logs, then it is unrelated to your Advisory Topics; as I've stated, they never store messages, so something else is going on. There are some nice instructions on the ActiveMQ website on how to take a look at what is keeping the KahaDB journal files alive, which you should use to help debug your issue.
I have event data from an app that helps tell me what people are doing inside my app.
userID | timestamp | name        | value
A      | 1         | Launch      | 23
A      | 3         | ClickButton | Header
B      | 2         | Launch      | 10
B      | 5         | ClickBanner | ad
etc.
I am defining a session as follows: any time someone has been out of the app for more than 5 minutes, their next entry is a new session. So if you come back after 4 minutes, it is still the same session.
I use LAG to select the previous launch timestamp, add that event's duration in seconds, and then take the difference to the next launch. That way I can select the first timestamp for each session.
Now I need to map each non-Launch event back to the session it is a part of, so I can easily analyze things such as "What percent of sessions include an ad click?"
I'm pulling my data using Hive and am not having success finding an efficient way to do this, as my dataset is fairly large.
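One common way to express this in Hive (0.11+) is a windowed "gaps and islands" query. This is only a sketch under assumed names (a table events with the timestamp stored in a ts column, in seconds): flag a row as a session start when the gap from the user's previous event exceeds 300 seconds, then take a running sum of that flag, so every event, Launch or not, inherits a session id.

    -- Flag session starts, then number sessions per user with a running sum.
    SELECT userID, ts, name, value,
           SUM(is_new) OVER (PARTITION BY userID ORDER BY ts
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS session_id
    FROM (
      SELECT userID, ts, name, value,
             CASE WHEN LAG(ts) OVER (PARTITION BY userID ORDER BY ts) IS NULL
                    OR ts - LAG(ts) OVER (PARTITION BY userID ORDER BY ts) > 300
                  THEN 1 ELSE 0 END AS is_new
      FROM events
    ) flagged;

Grouping by (userID, session_id) then answers questions like "what percent of sessions include an ad click?" with a single aggregation instead of per-session queries.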
I've got the following schema:
+----+------+------+-----------+---------------------+--------+
| id | from | to | message | timestamp | readed |
+----+------+------+-----------+---------------------+--------+
| 46 | 2 | 6 | 123 | 2013-11-19 19:12:19 | 0 |
| 44 | 2 | 3 | 123 | 2013-11-19 19:12:12 | 0 |
| 43 | 2 | 1 | ????????? | 2013-11-19 18:37:11 | 0 |
| 42 | 1 | 2 | adf | 2013-11-19 18:37:05 | 0 |
+----+------+------+-----------+---------------------+--------+
from/to are the users' IDs; message is, obviously, the message; then the timestamp and the read flag.
When a user opens his profile, I want him to see the list of dialogs he participated in, with the last message of each dialog.
To find a conversation between two people, I wrote this simple code (in the Message model):
def self.conversation(from, to)
where(from: [from, to], to: [from, to])
end
So I can now sort the messages and get the last one. But it's not great to fire a separate query like that for each dialog.
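For instance, fetching the last message of a single dialog looks like this (hypothetical user IDs, just to illustrate the scope above):

    last_message = Message.conversation(1, 2).order("timestamp ASC").last

Each dialog shown on the profile would need one such query, which is exactly the N+1 problem this question is about.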
How could I achieve the result I'm looking for with fewer queries?
UPDATE:
OK, it looks like it's not really clear what I'm trying to achieve.
For example, four users – Kitty, Dandy, Beggy, and Brucy – have used the chat.
When Brucy opens her dialogs, she should see:
Beggy: hello brucy haw ar u! | <--- the last message from beggy
-------
Dandy: Hi brucy! | <---- the last message from dandy
--------
Kitty: Hi Kitty, my name is Brucy! | <--- this last message is from the current user
So, three separate dialogs. Brucy can then enter any dialog to continue the private conversation.
And I can't figure out how I could fetch these records without firing a query for each dialog between users.
This answer is a bit late, but there doesn't seem to be a great way to do this, in Rails 3.2.x at least.
However, here is the solution I came up with (I had the same problem on my website).
@sender_ids =
  Message.where(recipient_id: current_user.id)
         .order("created_at DESC")
         .select("DISTINCT owner_id")
         .paginate(per_page: 10, page: params[:page])

sql_queries =
  @sender_ids.map do |user|
    user_id = user.owner_id
    "(SELECT * FROM messages WHERE owner_id = #{user_id} "\
    "AND recipient_id = #{current_user.id} ORDER BY id DESC "\
    "LIMIT 1)"
  end.join(" UNION ALL ")

@messages = Message.find_by_sql(sql_queries)
ActiveRecord::Associations::Preloader.new(@messages, :owner).run
This gets the last 10 unique people who sent you messages.
For each of those people, it builds a UNION ALL subquery to fetch the last message received from that person. With 50,000 rows, the query completes in about 20 ms. And of course, to get associations to preload, you have to use ActiveRecord::Associations::Preloader as above, since .includes will not work when using .find_by_sql.
def self.conversation(from, to)
  where(from: [from, to], to: [from, to]).order("timestamp ASC").last
end
Edit:
This RailsCast will be helpful:
http://railscasts.com/episodes/316-private-pub?view=asciicast
EDIT2:
def self.conversation(from, to)
  select([:from, :to, :message])
    .where(from: [from, to], to: [from, to])
    .group(:from, :to)
    .order("timestamp DESC")
    .limit(1)
end