I'm working on a script that gets its data from the Jira REST API and presents relevant details to the user as a table in an Excel file generated by XlsxWriter. Ideally, I would like the first column to display a hyperlink that leads the user back to the page in Jira where the information originated. Preferably, I would do that by creating hyperlinks that use just the issue key as the link text (rather than the whole URL).
The Working with Worksheet Tables documentation provides the following example of how to pass data into tables:
data = [
    ['Apples', 10000, 5000, 8000, 6000],
    ['Pears', 2000, 3000, 4000, 5000],
    ['Bananas', 6000, 6000, 6500, 6000],
    ['Oranges', 500, 300, 200, 700],
]

worksheet.add_table('B3:F7', {'data': data})
The write_url() method documentation also provides these examples:
worksheet.write_url(0, 0, 'https://www.python.org/')
worksheet.write_url('A2', 'https://www.python.org/')
What I would like to do, though, is provide the hyperlink details as part of the data list. In the example above, I'm envisioning the hyperlink details taking the place of the Apples, Pears, Bananas and Oranges strings (such that each might have link text like 'KEY-1' associated with a URL like 'https://jiraserver/browse/KEY-1' and so on). Is there a convenient way to do that?
Ah, rather than relying on XlsxWriter for this purpose, I see that the HYPERLINK function in Excel will provide the desired effect.
I simply need to provide something like the following at the desired position in the list.
issue_hyperlink = f'=HYPERLINK("{issue_url}", "{issue_key}")'
This will work for what I have in mind.
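A minimal sketch of that approach (the jiraserver URL, issue keys, and values are placeholders, not real data): the formula strings can simply be placed in the data passed to add_table(), since XlsxWriter writes strings beginning with '=' as formulas by default.

import xlsxwriter

# Hypothetical issues fetched from the Jira REST API.
issues = [
    ('KEY-1', 10000),
    ('KEY-2', 2000),
]

workbook = xlsxwriter.Workbook('jira_table.xlsx')
worksheet = workbook.add_worksheet()

# Build each row with a HYPERLINK formula in the first column.
data = []
for issue_key, value in issues:
    issue_url = f'https://jiraserver/browse/{issue_key}'
    issue_hyperlink = f'=HYPERLINK("{issue_url}", "{issue_key}")'
    data.append([issue_hyperlink, value])

# add_table() writes the formula strings as formulas, so the links work in Excel.
worksheet.add_table('A1:B3', {'data': data,
                              'columns': [{'header': 'Issue'},
                                          {'header': 'Estimate'}]})
workbook.close()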
Here is one way to do it:
import xlsxwriter

workbook = xlsxwriter.Workbook('table_with_links.xlsx')
worksheet = workbook.add_worksheet()

# Some sample data for the table.
data = [
    ['KEY-1', 10000, 5000, 8000, 6000],
    ['KEY-2', 2000, 3000, 4000, 5000],
    ['KEY-3', 6000, 6000, 6500, 6000],
    ['KEY-4', 500, 300, 200, 700],
]

# Set the column widths.
worksheet.set_column('A:E', 10)

# Add a table to the worksheet.
worksheet.add_table('A1:E5')

# Write the data to the table.
for row_num, row_data in enumerate(data):
    for col_num, col_data in enumerate(row_data):
        if col_num == 0:
            worksheet.write_url(row_num + 1, col_num,
                                f'https://jiraserver/browse/{col_data}',
                                None, col_data)
        else:
            worksheet.write(row_num + 1, col_num, col_data)

workbook.close()
I prefer using the real hyperlink instead of the HYPERLINK formula because I think it looks less confusing to the end user.
OK, so I have been banging my head against this problem for way too long now.
I want to sync stock levels of a product that is tracked with lots between the webshop and Odoo. For this reason I need to be able to make a stock adjustment of a lot via the API (in this case in Python).
I have found this possible way of doing it:
odoo(
    'stock.move',
    'create',
    [{
        "name": "Webshop stock adjustment",
        "company_id": 1,
        "location_id": 8,  # warehouse
        "location_dest_id": 14,  # virtual location
        "product_id": batch["product_id"][0],
        "product_uom": 1,
        "lot_ids": [batch["id"]],  # I am searching for the id by the lot name beforehand
        "product_uom_qty": 1,
        "quantity_done": 1,
        "state": "done"
    }]
)
This, however, results in two moves! One move has the correct lot, and another has no lot specified. The latter move is faulty, of course, as the product is tracked with lots. This results in a faulty lot entry where I can't change the quantity by hand, as the field is invalid. Worse, it results in wrong stock levels.
You can see the problematic bookings here.
I have tried to just create a stock.move.line, like so:
odoo(
    'stock.move.line',
    'create',
    [{
        "company_id": 1,
        "display_name": "Webshop adjustment",  # does not appear
        "location_id": location_id,
        "location_dest_id": location_dest_id,
        "product_id": batch["product_id"][0],
        "product_uom_id": 1,
        "lot_id": batch["id"],
        "product_uom_qty": quantity,
        "qty_done": quantity,
        "state": "done"  # has no effect
    }]
)
However, that results in a line with no effect.
I have also tried to find the stock adjustment wizard, but the only one I found in the code (as opposed to the UI) doesn't have a field for lots.
I'd be happy for any input on how to solve this problem!
Meanwhile, I managed to solve this problem reliably. I needed to implement a function for it on the Odoo side, rather than mucking around with the external API.
The function expects vals in the format below. It reduces whichever batch needs to go out first.
[{
    'sku': sku,
    'qty': quantity
},]
@api.model
def reduce_lots(self, vals):
    log(vals)
    for product_req in vals:
        product = self.env['product.product'].search(
            [['default_code', '=', product_req['sku']]]
        )
        if len(product) == 0:
            continue
        lots = self.env['stock.quant'].search(
            ['&', ('product_id', '=', product[0]['id']), ('on_hand', '=', True)],
            order='removal_date asc'
        )
        move = self.env['stock.move'].create({
            'name': product_req['order'],
            'location_id': 8,  # Our warehouse
            'location_dest_id': 14,  # Virtual location, Customer. If you need to increase stock, reverse the two numbers.
            'product_id': product.id,
            'product_uom': product.uom_id.id,
            'product_uom_qty': product_req['qty'],
        })
        move._action_confirm()
        move._action_assign()
        product_req['lots'] = []
        for line in move.move_line_ids:
            line.write({'qty_done': line['product_uom_qty']})
            product_req['lots'].append({
                '_qty': line['product_uom_qty'],
                '_lot_id': line.lot_id.name,
                '_best_before': line.lot_id.removal_date
            })
        move._action_done()
    return vals
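For reference, calling it over the external API could then look something like this (a sketch only: the target model, the odoo() RPC helper from the question, and the sample values are assumptions, not part of the answer):

# Assuming reduce_lots() was registered on a model such as stock.quant
# and is invoked through the same odoo() RPC helper as in the question.
result = odoo(
    'stock.quant',       # whichever model reduce_lots() was added to
    'reduce_lots',
    [[
        # Note: the function above also reads an 'order' key for the move name.
        {'sku': 'SKU-123', 'qty': 2, 'order': 'WEB-1001'},
    ]]
)
# The returned vals carry the consumed lots per SKU under the 'lots' key.
print(result)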
I'm facing a strange problem, maybe related to some cache that I cannot find.
I have the following models:
class Incubadores(models.Model):
    incubador = models.CharField(max_length=10, primary_key=True)
    posicion = models.CharField(max_length=10)


class Tareas(TimeStampedModel):
    priority = models.CharField(max_length=20, choices=PRIORITIES, default='normal')
    incubador = models.ForeignKey(Incubadores, on_delete=models.CASCADE, null=True, db_column='incubador')
    info = JSONField(null=True)
    datos = JSONField(null=True)

    class Meta:
        ordering = ('priority', 'modified', 'created')
I previously didn't have the db_column argument, so the Postgres column for that field was incubador_id.
I used db_column to change the name of the column, and then ran python manage.py makemigrations and python manage.py migrate, but I'm still getting the column as incubador_id whenever I perform a query such as:
>>> tareas = Tareas.objects.all().values()
>>> print(tareas)
<QuerySet [{'info': None, 'modified': datetime.datetime(2019, 11, 1, 15, 24, 58, 743803, tzinfo=<UTC>), 'created': datetime.datetime(2019, 11, 1, 15, 24, 58, 743803, tzinfo=<UTC>), 'datos': None, 'priority': 'normal', 'incubador_id': 'I1.1', 'id': 24}, {'info': None, 'modified': datetime.datetime(2019, 11, 1, 15, 25, 25, 49950, tzinfo=<UTC>), 'created': datetime.datetime(2019, 11, 1, 15, 25, 25, 49950, tzinfo=<UTC>), 'datos': None, 'priority': 'normal', 'incubador_id': 'I1.1', 'id': 25}]>
I need to modify this column name because I'm having other issues with Serializers. So the change is necessary.
If I perform the same query on other models where I've also changed the default column name, the problem is exactly the same.
It happens both in the shell and in the code.
I've tried different queries to make sure it's not related to Django's lazy query system, but the problem is the same. I've also tried executing django.db.connection.close().
If I run a direct SQL query against PostgreSQL, it cannot find incubador_id, only incubador, which is correct.
Does anyone have any idea what could be happening? I've already spent two days on this problem and I cannot find a reason :( It's a very basic operation.
Thanks!
This answer will explain why this is happening.
Django's built-in serializers don't have this issue, but probably won't yield exactly what you're looking for:
>>> from django.core import serializers
>>> serializers.serialize("json", Tareas.objects.all())
'[{"model": "inc.tareas", "pk": 1, "fields": {"priority": "normal", "incubador": "test-i"}}]'
You could use the fields attribute here, which seems like it would give you what you're looking for.
You don't specify what your "other issues with Serializers" are, but my suggestion would be to write custom serialization code. Relying on something like .values() or even serializers.serialize() is a bit too implicit for me; writing explicit serialization code makes it less likely you'll accidentally break a contract with a consumer of your serialized data if this model changes.
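As a rough illustration, a minimal sketch of such explicit serialization (the function name and the exact field selection are my own assumptions, not from your code):

def tarea_to_dict(tarea):
    """Explicitly map model fields to the keys the API consumer expects."""
    return {
        'id': tarea.id,
        'priority': tarea.priority,
        # Use the FK attribute directly, so the key is 'incubador'
        # rather than the automatic 'incubador_id' attname.
        'incubador': tarea.incubador_id,  # the related primary key value
        'info': tarea.info,
        'datos': tarea.datos,
        'created': tarea.created.isoformat(),
        'modified': tarea.modified.isoformat(),
    }

payload = [tarea_to_dict(t) for t in Tareas.objects.all()]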
Note: Please try to make the example you provide minimal and reproducible. I removed some fields to make this work with stock Django, which is why the serialized value is missing fields; the _id issue was still present without the third-party apps you're using, and was resolved with serializers. This also isn't specific to PG; it happens in sqlite as well.
I'm using @Query from the Spring Data package and I want to query on the last element of an array in a document.
For example the data structure could be like this:
{
    name: 'John',
    scores: [10, 12, 14, 16]
},
{
    name: 'Mary',
    scores: [78, 20, 14]
}
So I've built a query; however, it fails with "error message 'unknown operator: $slice' on server".
The $slice part of the query, when run separately, is fine:
db.getCollection('users').find({}, {scores: { $slice: -1 } })
However, as soon as I combine it with a more complex check, it gives the error mentioned above.
db.getCollection('users').find({"$and": [{}, {"scores": {"$slice": -1}}, {"scores": "16"}]})
This query would return the list of users who had a last score of 16; in my example John would be returned but not Mary.
I've written it as a standard Mongo query (to debug things), but ideally I need it to go into a Spring Data @Query construct; they should be fairly similar.
Is there any way of doing this without resorting to hand-cranked Java calls? I don't see much documentation for @Query, other than that it takes standard queries.
As commented, the linked post refers to aggregate; how does that work with @Query? Also, one of the main answers uses $where, which is inefficient.
The general way forward with this problem is, unfortunately, to change the data: although @Veeram's response is correct, it means that you do not hit indexes. This is an issue once you've got very large data sets, of course, and you will see ever-worsening response times. It's something $where and $arrayElemAt cannot help you with: they have to pre-process the data, and that means a full collection scan. We analysed several queries with these constructs and they all involved a "COLLSCAN".
The solution is ideally to create a field that contains the last item, for instance:
{
    name: 'John',
    scores: [10, 12, 14, 16],
    lastScore: 16
},
{
    name: 'Mary',
    scores: [78, 20, 14],
    lastScore: 14
}
You could create a listener to maintain this as follows:
@Component
public class ScoreListener extends AbstractMongoEventListener<Scores>
You then get the ability to sniff the data and make any updates:
@Override
public void onBeforeConvert(BeforeConvertEvent<Scores> event) {
    // process any score and set lastScore
}
Don't forget to update your indexes (!):
@CompoundIndex(name = "lastScore", def = "{"
    + "'lastScore': 1"
    + " }")
Although this has the disadvantage of slightly duplicating data, in current Mongo (3.4) it really is the only way of doing this AND still using indexes in the search. The speed differences were dramatic: from nearly a minute of response time down to milliseconds.
In Mongo 3.6 there may be better ways of doing this, but we are fixed on this version, so this has to be our solution.
Is it possible to pass select values and labels from the server?
It would be nice to be able to pass ids and string representations to select2.
For example, I want to pass [[1, 2, 3, 4, 5], ["name1", "name2", ..., "name5"]].
Just like in the client-side setup of yadcf, you can send an array of objects with value / label from the server.
See the Server side source example: inspect the dev tools -> Network -> entrys_table_server_side_source and look at the yadcf_data_0 property:
yadcf_data_0: [{value: "Trident", label: "Trident Eng'"},
{value: "Tasman", label: "Tasman Eng'"},…]
I'm using Pandas and making an HDFStore object. I calculate 500 columns of data and write them to a table-format HDFStore object. Then I close the file, delete the data from memory, compute the next 500 columns (labelled by an increasing integer), open up the store, and try to append the new columns. However, it doesn't like this. It gives me the error:
invalid combinate of [non_index_axes] on appending data [[(1, [500, 501, 502, ...])]] vs current table [[(1, [0, 1, 2, ...])]]
I'm assuming it only allows appending more rows, not more columns. So how do I add more columns?
HDF5 files have a fixed structure, and you cannot easily add a column, but a workaround is to concatenate different DataFrames and then re-write them into the HDF5 file.
import pandas as pd

hdf5_files = ['data1.h5', 'data2.h5', 'data3.h5']

df_list = []
for file in hdf5_files:
    df = pd.read_hdf(file)
    df_list.append(df)

# Concatenate column-wise, since each file holds a different batch of columns.
result = pd.concat(df_list, axis=1)
# You can now use the result DataFrame to access all of the data from the HDF5 files
Does this solve your problem?
Remember, HDF5 is not designed for efficient append operations; you should consider a database system if you need to frequently add new columns to your data, IMHO.
Your existing table was written with columns [0, 1, 2, ...], and you are trying to append a DataFrame with different columns [500, 501, 502, ...]; append() only accepts data with the same columns as the existing table.
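If re-writing whole files is not an option, one workaround (a sketch only, with illustrative key names and random sample data) is to store each 500-column batch under its own key in the same HDFStore and only combine them column-wise when reading:

import numpy as np
import pandas as pd

store = pd.HDFStore('columns.h5')

# Write each batch of columns under its own key (key names are illustrative).
batch_0 = pd.DataFrame(np.random.randn(100, 500), columns=range(0, 500))
batch_1 = pd.DataFrame(np.random.randn(100, 500), columns=range(500, 1000))
store.put('batch_0', batch_0, format='table')
store.put('batch_1', batch_1, format='table')

# Combine column-wise on read; the row index must line up across batches.
result = pd.concat([store['batch_0'], store['batch_1']], axis=1)
store.close()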