Send keys with modifiers in Capybara/Poltergeist - phantomjs

So Poltergeist's send_keys lets you do this:
element = find('input#id')
element.native.send_key('String')
element.native.send_keys('H', 'elo', :Left, 'l') # => 'Hello'
element.native.send_key(:Enter) # triggers Enter key
I'm looking to send key combinations like:
Control-A
Alt-C
I can't find any references, and my various attempts haven't had any success.
Suggestions?

According to Issue #420 and the accompanying commit, you can do it in the following way:
element.native.send_keys('H', [:Shift, 'elo'], :Left, 'l')
element.native.send_key([:Ctrl, :Enter])
You can define multiple modifiers like this:
[:Ctrl, :Shift, "aaa"]
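For the combinations asked about, the same form should work (an untested sketch that just follows the examples above):
element.native.send_keys([:Ctrl, 'a'])  # Control-A, e.g. select all
element.native.send_keys([:Alt, 'c'])   # Alt-C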
There is currently no release containing that change (last one is 1.6.0), so you will need to build it yourself.

Odoo: how to set default field for many2many field by id?

I want to set a default value in a many2many field. For example,
this is the field in models.py:
# alarms
alarm_ids = fields.Many2many(
    'calendar.alarm', 'calendar_alarm_calendar_event_rel',
    string='Reminders', ondelete="restrict",
    help="Notifications sent to all attendees to remind of the meeting.")
There are also default values created by the system, and I want the first of those by default.
I know that I can set it by id, but I don't know how.
You can use Command to set X2many field values.
From the documentation:
Via Python, we encourage developers craft new commands via the various functions of this namespace. We also encourage developers to use the command identifier constant names when comparing the 1st element of existing commands.
Example:
from odoo import Command, models, fields

class CalendarEvent(models.Model):
    _inherit = 'calendar.event'

    alarm_ids = fields.Many2many(
        default=lambda self: [Command.link(self.env.ref('calendar.alarm_notif_1').id)])
I just looked in postgres; there are no ids like in the XML screenshots, so I just took the id of a record and found some documentation. The next example works:
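A hypothetical sketch of that approach, assuming the record id found in postgres is used directly in place of env.ref (the id 1 below is only a placeholder):
alarm_ids = fields.Many2many(
    default=lambda self: [Command.link(1)])  # 1 = database id of the calendar.alarm row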

Splunk Host header overrides host key from log messages

How can I stop Splunk from treating the hostname "host" as more important than the "host" key from my log messages?
Let's suppose that I have the following logs:
color = red ; host = localhost
color = blue ; host = newhost
The following query works fine:
index=myindex | stats count by color
but the following doesn't:
index=myindex | stats count by host
because instead of treating "host" as the key from the log, Splunk resolves "host" to the Host header.
How can I deal with this?
When there are two fields with the same name one of them has to "win". In this case, it's the one Splunk defines before it processes the event itself. As you probably know, every event is given 4 fields at input time: index, host, source, and sourcetype. Data from the event won't override these unless specifically told to do so in the config files.
To override the settings, put this in your transforms.conf file
[sethost]
REGEX = host\s*=\s*(\w+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
You'll also need to reference the transform in your props.conf file
[mysourcetype]
TRANSFORMS-host = sethost
I would have thought this solution would be more prominent, but I found it buried deep in the Splunk docs.
https://docs.splunk.com/Documentation/Splunk/8.2.6/Metrics/Search
You can use reserved fields such as "source", "sourcetype", or "host" as dimensions. However, when extracted dimension names are reserved names, the name is prefixed with "extracted_" to avoid name collision. For example, if a dimension name is "host", search for "extracted_host" to find it.
So, in your case:
index=myindex | stats count by extracted_host

Add field/string length to logstash event

I'm trying to add a string-length field to an index. Ideally, I'd like to use the Kibana scripted-field feature, since that lets me 'add' this field after the fact, but I keep getting a null_pointer_exception with the following code... I'm trying to sort a visualization based on the field's length.
doc['field'].value ? doc['field'].length() : 0
Is this correct?
I thought it was because my field isn't always set (sparse data), but I added the ?: 0 to handle that (which didn't work).
Any ideas?
You can define a scripted field in Kibana, of type int, with language Painless, and try this:
return (doc['field'].value != null? doc['field'].value.length(): 0);

Redis: How to increment hash key when adding data?

I'm iterating through data and dumping some to a Redis DB. Here's an example:
hmset id:1 username "bsmith1" department "accounting"
How can I increment the unique ID on the fly and then use that during the next hmset command? This seems like an obvious ask but I can't quite find the answer.
Use another key, a String, for storing the last ID. Before calling HMSET, call INCR on that key to obtain the next ID. Wrap the two commands in a MULTI/EXEC block or a Lua script to ensure the atomicity of the transaction.
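A minimal sketch of the Lua-script variant, assuming redis-py (the counter key name last_id is a placeholder; the field values follow the question's example):
import redis

r = redis.Redis()

# Reserve the next id and write the hash in one atomic server-side step.
add_user = r.register_script("""
    local id = redis.call('INCR', KEYS[1])
    redis.call('HMSET', 'id:' .. id, 'username', ARGV[1], 'department', ARGV[2])
    return id
""")

new_id = add_user(keys=['last_id'], args=['bsmith1', 'accounting'])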
As Itamar mentions, you can store your index/counter in a separate key. In this example I've chosen the name index for that key.
Python 3
import redis

KEY_INDEX = 'index'

r = redis.from_url(host)  # host is a redis:// connection URL

def store_user(user):
    r.incr(KEY_INDEX, 1)  # If the key doesn't exist it will be created
    index = r.get(KEY_INDEX).decode('utf-8')  # Decode from bytes to string
    int_index = int(index)  # Convert from string to int
    result = r.set('user::%d' % int_index, user)
    ...
Note that user::<index> is an arbitrary key chosen by me. You can use whatever you want.
If you have multiple machines writing to the same DB you probably want to use pipelines.
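A rough batch sketch along those lines, assuming redis-py (store_users and the id:<n> key format follow the question's example but are otherwise placeholders):
def store_users(users):
    pipe = r.pipeline()  # MULTI/EXEC transaction by default in redis-py
    for user_fields in users:
        new_index = r.incr(KEY_INDEX)  # atomic even with many concurrent writers
        pipe.hset('id:%d' % new_index, mapping=user_fields)
    pipe.execute()  # one round trip for all the hash writes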

Add custom columns to delayed_jobs table

Can someone show me how to extend the delayed_jobs gem to allow me to add a couple custom columns?
I added a couple of columns, but when I try to 'cleanly' use them I get:
Can't mass-assign protected attributes: owner_type, owner_id
So I need to add the columns to cattr_accessor:
module Delayed
  class Worker
    DEFAULT_SLEEP_DELAY = 5
    DEFAULT_MAX_ATTEMPTS = 25
    DEFAULT_MAX_RUN_TIME = 4.hours
    DEFAULT_DEFAULT_PRIORITY = 0
    DEFAULT_DELAY_JOBS = true
    DEFAULT_QUEUES = []
    DEFAULT_READ_AHEAD = 5

    cattr_accessor :min_priority, :max_priority, :max_attempts, :max_run_time,
                   :default_priority, :sleep_delay, :logger, :delay_jobs, :queues,
                   :read_ahead, :plugins, :destroy_failed_jobs,
                   :owner_id, :owner_type  # the two custom accessors I added
However, I'm not sure of the best way to extend this. My guess/attempt was to create a file and add it to the initializers directory, but for some reason it didn't work.
Any tips appreciated.
Do you really need to extend the delayed_jobs table? My approach has been to leave it alone and use one of two techniques:
add owner_id and owner_type fields to the object being queued.
create a separate table with a :belongs_to relationship to delayed_jobs, then use DJ's hooks to keep the two in sync through the lifetime of the job (a rough sketch follows below).
The first approach is simpler, but isn't right for every situation. Would either of those work for you?
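A rough sketch of the second approach (the class and association names are illustrative, not from the answer; the enqueue hook is one of delayed_job's documented lifecycle hooks):
class JobOwnership < ActiveRecord::Base
  belongs_to :delayed_job, class_name: 'Delayed::Job'
  belongs_to :owner, polymorphic: true
end

class OwnedTask < Struct.new(:owner)
  def enqueue(job)  # runs when the job row is created
    JobOwnership.create!(delayed_job: job, owner: owner)
  end

  def perform
    # the actual work goes here
  end
end

# Usage: Delayed::Job.enqueue OwnedTask.new(some_record)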
The other answers may be useful, but they do not answer the question. To add custom columns to the delayed_jobs table, you can follow these steps. I did so and successfully created an association between Delayed::Job and other objects.
Another option is to simply add the following line in your initializer (ex. config/initializers/delayed_job.rb):
Delayed::Job.attr_accessible :owner_type, :owner_id