Is it possible to pass select values and labels from the server?
It would be nice to be able to pass ids and string representations to select2.
For example, I want to pass [[1, 2, 3, 4, 5], ["name1", "name2", ..., "name5"]].
Just like in the client-side setup of yadcf, you can send an array of value/label objects from the server.
See the server-side source example: inspect dev tools -> Network -> entrys_table_server_side_source and look at yadcf_data_0:
yadcf_data_0: [{value: "Trident", label: "Trident Eng'"},
{value: "Tasman", label: "Tasman Eng'"},…]
I'm working on a script that gets its data from the Jira REST API and presents relevant details to the user as a table in an Excel file generated by XlsxWriter. Ideally, I would like the first column to display a hyperlink that leads the user back to the page in Jira where the information originated. Preferably, I would do that by creating hyperlinks that use just the issue key as the link text (rather than the whole URL).
The Working with Worksheet Tables documentation provides the following example of how to pass data into tables:
data = [
    ['Apples', 10000, 5000, 8000, 6000],
    ['Pears', 2000, 3000, 4000, 5000],
    ['Bananas', 6000, 6000, 6500, 6000],
    ['Oranges', 500, 300, 200, 700],
]
worksheet.add_table('B3:F7', {'data': data})
The write_url() method documentation also provides these examples:
worksheet.write_url(0, 0, 'https://www.python.org/')
worksheet.write_url('A2', 'https://www.python.org/')
What I would like to do, though, is provide the hyperlink details as part of the data list. In the example above, I'm envisioning the hyperlink details taking the place of the Apples, Pears, Bananas and Oranges strings (such that each might have link text like 'KEY-1' associated with a URL like 'https://jiraserver/browse/KEY-1' and so on). Is there a convenient way to do that?
Ah, rather than relying on XlsxWriter for this purpose, I see that the HYPERLINK function in Excel will provide the desired effect.
I simply need to provide something like the following at the desired position in the list.
issue_hyperlink = f'=HYPERLINK("{issue_url}", "{issue_key}")'
This will work for what I have in mind.
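As a concrete sketch, the formula strings can be placed directly in the data list passed to add_table(), since XlsxWriter's write() treats '='-prefixed strings as formulas by default (the issue keys and the jiraserver URL below are placeholders):
import xlsxwriter

workbook = xlsxwriter.Workbook('table_with_formula_links.xlsx')
worksheet = workbook.add_worksheet()

rows = [
    ('KEY-1', [10000, 5000, 8000, 6000]),
    ('KEY-2', [2000, 3000, 4000, 5000]),
]
data = []
for issue_key, values in rows:
    issue_url = f'https://jiraserver/browse/{issue_key}'
    # '='-prefixed strings are written as formulas by default.
    data.append([f'=HYPERLINK("{issue_url}", "{issue_key}")'] + values)

worksheet.add_table('B3:F5', {'data': data})
workbook.close()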
Here is one way to do it:
import xlsxwriter
workbook = xlsxwriter.Workbook('table_with_links.xlsx')
worksheet = workbook.add_worksheet()
# Some sample data for the table.
data = [
    ['KEY-1', 10000, 5000, 8000, 6000],
    ['KEY-2', 2000, 3000, 4000, 5000],
    ['KEY-3', 6000, 6000, 6500, 6000],
    ['KEY-4', 500, 300, 200, 700],
]
# Set the columns widths.
worksheet.set_column('A:E', 10)
# Add a table to the worksheet.
worksheet.add_table('A1:E5')
# Write the data to the table.
for row_num, row_data in enumerate(data):
    for col_num, col_data in enumerate(row_data):
        if col_num == 0:
            worksheet.write_url(row_num + 1, col_num,
                                f'https://jiraserver/browse/{col_data}',
                                None, col_data)
        else:
            worksheet.write(row_num + 1, col_num, col_data)
workbook.close()
Output: (screenshot of the generated table omitted)
I prefer using the real hyperlink instead of the HYPERLINK formula because I think it looks less confusing to the end user.
I have a table into which I uploaded a file with a column converted to JSON, but when I try to unpack it using KQL, it does not work.
Id | Query Name | Workitem Id | Logged Date | Details
0 | Bug Stats | 111 | 2022-06-08T02:26:43.111196Z | {'AssignedTo': 'me', 'ClosedDate': None, 'CreatedDate': '2022-03-08T19:28:15.673Z', 'StartDate': None, 'State': 'For Review', 'Tags': 'tags', 'Title': 'Title', 'WorkItemType': 'Bug'}
This returns nothing:
Datatable
| extend Details = parse_json(Details) // I tried todynamic() as well, but got the same response
| evaluate bag_unpack(Details)
This returns an error:
Datatable
| evaluate bag_unpack(Details)
ERROR:
Semantic error: evaluate bag_unpack(): the following error(s) occurred while evaluating the output schema: evaluate bag_unpack(): argument #1 expected to be a reference to a dynamic column. Query: 'DevopsQueriesTest |evaluate bag_unpack(Details) '
You do need to first invoke parse_json() on Details to create a dynamic value out of the string value.
See: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/parsejsonfunction
However, the input string you have isn't a valid JSON payload - it uses single quotes instead of double quotes.
It's best to fix the component that generates the data and ingest valid payloads instead of invalid ones.
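For example, the payload above looks like a Python dict repr, so if the producing component happens to be Python (an assumption), it could emit valid JSON before upload:
import ast
import json

raw = "{'AssignedTo': 'me', 'ClosedDate': None, 'State': 'For Review'}"

# ast.literal_eval safely parses the Python-repr string; json.dumps then
# emits valid JSON (double quotes, null instead of None).
print(json.dumps(ast.literal_eval(raw)))
# {"AssignedTo": "me", "ClosedDate": null, "State": "For Review"}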
If, for whatever reason, you can't or won't do so - and you prefer paying a performance hit as part of your queries - you can use the translate() function, for example.
See: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/translatefunction
Moreover, the None in your payload is invalid too - it needs to be wrapped in double quotes for the JSON payload to become valid.
You can use the replace_string() function to fix that at query runtime.
See: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/replace-string-function
print s = "{'AssignedTo': 'me', 'ClosedDate': None, 'CreatedDate': '2022-03-08T19:28:15.673Z', 'StartDate': None, 'State': 'For Review', 'Tags': 'tags', 'Title': 'Title', 'WorkItemType': 'Bug'}"
| project s = parse_json(replace_string(translate("'", '"', s), "None", '"None"'))
| evaluate bag_unpack(s)
AssignedTo | ClosedDate | CreatedDate | StartDate | State | Tags | Title | WorkItemType
me | None | 2022-03-08 19:28:15.6730000 | None | For Review | tags | Title | Bug
OK, so I have been banging my head against this problem for way too long by now.
I want to sync stock levels of a product that is tracked with lots between the webshop and Odoo. For this reason I need to be able to make a stock adjustment of a lot via the API (in this case in python).
I have found this possible way of doing it:
odoo(
    'stock.move',
    'create',
    [{
        "name": "Webshop stock adjustment",
        "company_id": 1,
        "location_id": 8,  # warehouse
        "location_dest_id": 14,  # virtual location
        "product_id": batch["product_id"][0],
        "product_uom": 1,
        "lot_ids": [batch["id"]],  # I am searching for the id by the lot name beforehand
        "product_uom_qty": 1,
        "quantity_done": 1,
        "state": "done"
    }]
)
This, however, results in two moves! One move has the correct lot, and the other has no lot specified. The latter move is faulty, of course, as the product is tracked with lots. This results in a faulty lot entry whose quantity I can't change by hand, as the field is invalid. Worse, it results in wrong stock levels.
You can see the problematic bookings here
I have tried to just create a stock.move.line, like so:
odoo(
    'stock.move.line',
    'create',
    [{
        "company_id": 1,
        "display_name": "Webshop adjustment",  # does not appear
        "location_id": location_id,
        "location_dest_id": location_dest_id,
        "product_id": batch["product_id"][0],
        "product_uom_id": 1,
        "lot_id": batch["id"],
        "product_uom_qty": quantity,
        "qty_done": quantity,
        "state": "done"  # has no effect
    }]
)
However, that results in a line with no effect.
I have also tried to find the stock adjustment wizard, but the only one I found in the code (as opposed to the UI) doesn't have a field for lots.
I'd be happy for any input on how to solve this problem!
Meanwhile, I managed to solve this problem reliably. I needed to implement a server-side function for it, rather than mucking around with the external API.
The function below expects vals in the following format; it reduces whichever batch needs to go out first (earliest removal date).
[{
    'sku': sku,
    'qty': quantity
},]
@api.model
def reduce_lots(self, vals):
    # log(vals)  # assumes a project-specific logging helper; use your own logger
    for product_req in vals:
        product = self.env['product.product'].search(
            [('default_code', '=', product_req['sku'])]
        )
        if len(product) == 0:
            continue
        lots = self.env['stock.quant'].search(
            ['&', ('product_id', '=', product[0]['id']), ('on_hand', '=', True)],
            order='removal_date asc'
        )  # informational: the quants the assignment will draw from (not used below)
        move = self.env['stock.move'].create({
            # the minimal vals format above has no 'order' key; fall back to a generic name
            'name': product_req.get('order', 'Webshop stock adjustment'),
            'location_id': 8,  # our warehouse
            'location_dest_id': 14,  # virtual location, customer; swap the two ids to increase stock instead
            'product_id': product.id,
            'product_uom': product.uom_id.id,
            'product_uom_qty': product_req['qty'],
        })
        move._action_confirm()
        move._action_assign()  # reserves stock, picking lots by removal date
        product_req['lots'] = []
        for line in move.move_line_ids:
            line.write({'qty_done': line['product_uom_qty']})
            product_req['lots'].append({
                '_qty': line['product_uom_qty'],
                '_lot_id': line.lot_id.name,
                '_best_before': line.lot_id.removal_date,
            })
        move._action_done()
    return vals
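For completeness, here is a sketch of how the webshop side might call such a method over XML-RPC; the URL, database, credentials, and the model the method was added to (stock.move is used here) are all placeholders:
import xmlrpc.client

url, db, user, password = 'https://odoo.example.com', 'mydb', 'api@example.com', 'secret'

common = xmlrpc.client.ServerProxy(f'{url}/xmlrpc/2/common')
uid = common.authenticate(db, user, password, {})

models = xmlrpc.client.ServerProxy(f'{url}/xmlrpc/2/object')
# Assumes reduce_lots() was added to stock.move in a custom module.
result = models.execute_kw(db, uid, password, 'stock.move', 'reduce_lots',
                           [[{'sku': 'SKU-123', 'qty': 2}]])
print(result)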
My goal in writing a function is to allow callers to pass in the same condition arguments they would to a where call in ActiveRecord, and I want the corresponding Rails-generated SQL.
Example
If my function receives a hash like this as an argument
role: 'Admin', id: [4, 8, 15]
I would expect to generate this string
"users"."role" = 'Admin' AND "users"."id" IN (4, 8, 15)
Possible Solutions
I get the closest with to_sql.
pry(main)> User.where(role: 'Admin', id: [4, 8, 15]).to_sql
=> "SELECT \"users\".* FROM \"users\" WHERE \"users\".\"role\" = 'Admin' AND \"users\".\"id\" IN (4, 8, 15)"
It returns almost exactly what I want; however, I would be more comfortable not stripping away the SELECT ... WHERE myself in case something changes in the way the SQL is generated. I realize the WHERE should always be there to split on, but I'd prefer an even less brittle approach.
My next approach was using Arel's where_sql function.
pry(main)> User.where(role: 'Admin', id: [4, 8, 15]).arel.where_sql
=> "WHERE \"users\".\"role\" = $1 AND \"users\".\"id\" IN (4, 8, 15)"
It gets rid of the SELECT but leaves the WHERE. I would prefer it to the above if it had already injected the sanitized role, but that renders it quite a bit less desirable.
I've also considered generating the SQL myself, but I would prefer to avoid that.
Do any of you know if there's some method right under my nose I just haven't found yet? Or is there a better way of doing this altogether?
Ruby 2.3.7
Rails 5.1.4
I too would like to know how to get the conditions without the leading WHERE. I see in https://coderwall.com/p/lsdnsw/chain-rails-scopes-with-or that they used string manipulation to get rid of the WHERE, which seems messy but maybe the only solution currently. :/
scope.arel.where_sql.gsub(/\AWHERE /i, "")
I'm using @Query from the Spring Data package, and I want to query on the last element of an array in a document.
For example the data structure could be like this:
{
    name: 'John',
    scores: [10, 12, 14, 16]
},
{
    name: 'Mary',
    scores: [78, 20, 14]
}
So I've built a query; however, it is complaining with "error message 'unknown operator: $slice' on server".
The $slice part of the query, when run separately, is fine:
db.getCollection('users').find({}, {scores: { $slice: -1 } })
However, as soon as I combine it with a more complex check, it gives the error mentioned above.
db.getCollection('users').find({"$and": [{ }, {"scores": { "$slice": -1 }}, {"scores": "16"}]})
This query would return the list of users who had a last score of 16, in my example John would be returned but not Mary.
I've put it into a standard Mongo query (to debug things); ideally, though, I need it to go into a Spring Data @Query construct - they should be fairly similar.
Is there any way of doing this without resorting to hand-cranked Java calls? I don't see much documentation for @Query, other than that it takes standard queries.
As commented with the linked post, that refers to aggregate - how does that work with @Query? Plus, one of the main answers uses $where, which is inefficient.
The general way forward with this problem is, unfortunately, to change the data: although @Veeram's response is correct, it means that you do not hit indexes. This is an issue when you have very large data sets, and you will see steadily worsening response times. It's something $where and $arrayElemAt cannot help you with; they have to pre-process the data, and that means a full collection scan. We analysed several queries with these constructs, and they all involved a COLLSCAN.
The solution is ideally to create a field that contains the last item, for instance:
{
    name: 'John',
    scores: [10, 12, 14, 16],
    lastScore: 16
},
{
    name: 'Mary',
    scores: [78, 20, 14],
    lastScore: 14
}
You could create a listener to maintain this as follows:
@Component
public class ScoreListener extends AbstractMongoEventListener<Scores>
You then get the ability to sniff the data and make any updates; a sketch (the entity accessor names below are assumptions):
@Override
public void onBeforeConvert(BeforeConvertEvent<Scores> event) {
    Scores scores = event.getSource();
    List<Integer> list = scores.getScores();  // accessor names assumed
    if (list != null && !list.isEmpty()) {
        scores.setLastScore(list.get(list.size() - 1));  // set lastScore to the final element
    }
}
Don't forget to update your indexes (!):
@CompoundIndex(name = "lastScore", def = "{"
    + "'lastScore': 1"
    + " }")
Although this has the disadvantage of slightly duplicating data, in current Mongo (3.4) it really is the only way of doing this AND having the search use indexes. The speed difference was dramatic: from nearly a minute of response time down to milliseconds.
In Mongo 3.6 there may be better ways of doing this; however, we are fixed on this version, so this has to be our solution.