Best Postgres/DB format for storing an array of strings with a boolean attached to them in Django? - sql

I'm storing an array of URL links in my Postgres database like this:
urls = ArrayField(
    models.CharField(max_length=250, blank=False)
)
But I want to track if a URL has been visited or not, as a boolean. What is the best way to do this?

You can use django.contrib.postgres.fields.JSONField, a PostgreSQL-specific field that stores data as JSON. That lets you store a list of dicts, each with url and visited keys:
from django.core.serializers.json import DjangoJSONEncoder
...
urls = JSONField(encoder=DjangoJSONEncoder, default=list)
https://docs.djangoproject.com/en/2.0/ref/contrib/postgres/fields/#django.contrib.postgres.fields.JSONField
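A plain-Python sketch of the shape such a field would hold (the key names follow the question's intent; the exact schema is your choice, not fixed by the API):

```python
import json

# Shape stored in the JSONField: each URL carries its own visited flag,
# instead of a bare string array with no room for per-item attributes.
urls = [
    {"url": "https://example.com/a", "visited": False},
    {"url": "https://example.com/b", "visited": True},
]

# Marking a URL as visited is a plain dict update before calling .save():
for entry in urls:
    if entry["url"] == "https://example.com/a":
        entry["visited"] = True

payload = json.dumps(urls)  # what ends up in the jsonb column
```

If you later need to filter on the flag in SQL, a separate related model with a URLField and a BooleanField is the more relational alternative.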

Related

How to update values of existing keys or insert new key-value pairs into a Mongo document from a Python dictionary using pymongo?

I have a dictionary object in Python:
dic = {"_id": 1, "A": 123, "B": 234, "C": 222}
and a collection in Mongo that contains a document with the same ID, which looks like:
mongodoc = {"_id": 1, "A": 233, "B": 234, "D": 999}
I want to update the document in Mongo directly from the dictionary values using pymongo, so that it becomes:
mongodoc = {"_id": 1, "A": 123, "B": 234, "D": 999, "C": 222}
Where a key exists in both the dictionary and the Mongo document, the value should be updated; otherwise the new key-value pair should be inserted.
I tried using
collection.update({"_id": 1}, {"$set": {...}})
but this needs to be given specific keys, which doesn't work for my problem statement. Not sure how to proceed.
Kindly help.
The $set operator takes a dictionary which will achieve what you are looking to do:
collection.update_one({'_id': dic['_id']}, {'$set': dic})
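Since running pymongo needs a live server, here is a plain-Python sketch of what $set does to the stored document (values taken from the question):

```python
dic = {"_id": 1, "A": 123, "B": 234, "C": 222}
mongodoc = {"_id": 1, "A": 233, "B": 234, "D": 999}

# $set semantics: keys present in the update are overwritten or added,
# keys only present in the stored document are left untouched.
mongodoc.update({k: v for k, v in dic.items() if k != "_id"})
```

After the update, mongodoc matches the desired result shown in the question.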

How to store special characters in a JSON column in MySQL

I am trying to save some special characters in a JSON column of a table, but it is not working as I expect. This is how the value is stored:
{"district":"\u099a\u09be\u0981\u09a6\u09aa\u09c1\u09b0",
"sub_district":"\u099a\u09be\u0981\u09a6\u09aa\u09c1\u09b0"}
I am storing my input values as below (I am using Laravel):
$present = array('district' => $request->district, 'sub_district' => $request->sub_district);
$card->present_address = json_encode($present);
And while searching for a string in that JSON object, I am using the query below:
$allowances = Card::select('id', 'name', 'nid', 'village')
    ->where(DB::raw("json_extract(present_address, '$.district')"), 'চাদপুর')
    ->get();
Can anybody help me store special characters/unicode in that JSON object?
This is perfectly normal: json_encode() escapes non-ASCII characters as \uXXXX sequences by default. As soon as you json_decode() the value, you will get back your expected strings:
array (
    'district' => 'চাঁদপুর',
    'sub_district' => 'চাঁদপুর',
)
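The same escaping happens in Python's json module; a quick sketch of the round trip, and of disabling the escaping:

```python
import json

district = "চাঁদপুর"

# Default behaviour mirrors PHP's json_encode(): non-ASCII becomes \uXXXX.
escaped = json.dumps({"district": district})

# Disabling the escaping keeps the raw UTF-8 text in the stored string.
readable = json.dumps({"district": district}, ensure_ascii=False)

# Either form decodes back to the same original string.
decoded_escaped = json.loads(escaped)["district"]
decoded_readable = json.loads(readable)["district"]
```

In PHP the equivalent switch is json_encode($present, JSON_UNESCAPED_UNICODE); either way, the escaped form is valid JSON and decodes identically.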

Django ImageFieldFile set to <ImageFieldFile: null>. How to change it to blank/empty, like other entries in the table?

I have a FileField in a Django model. Some entries in this FileField are set to <ImageFieldFile: null>.
The rest of the rows are blank or actual images. I need to clean the table so that these erroneous entries are also changed to blank, as this is not acceptable for my DRF API.
Get all the model instances and clear the bad values:
objs = MyClassModel.objects.all()  # just assuming this is your model
for obj in objs:
    if not obj.image or obj.image == "null":  # only touch the bad entries
        obj.image = ""
        obj.save()
This should work, if I understand you correctly.
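The tricky part is deciding which stored values count as "null". A small helper makes that rule explicit and testable (the placeholder strings checked here are assumptions based on how the bad rows render):

```python
def normalize_image_value(value):
    """Map null-ish placeholders to an empty string; keep real paths."""
    if value is None or value in ("null", "<ImageFieldFile: null>"):
        return ""
    return value
```

You would then apply it to each row's field before saving, so real image paths pass through unchanged.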

Splunk: formatting a CSV file during indexing, values are being treated as new columns?

I am trying to create a new field during indexing; however, the field becomes a column instead of a value when I try to concatenate. What am I doing wrong? I have looked in the docs and my config seems correct according to them.
Would appreciate some help on this.
e.g. the .csv file:
Header1,Header2
Value1 ,121244
transforms.conf:
[test_transformstanza]
SOURCE_KEY = fields:Header1,Header2
REGEX = ^(\w+\s+)(\d+)
FORMAT = testresult::$1.$2
WRITE_META = true
fields.conf:
[testresult]
INDEXED = True
The regex is good and creates two groups from the data, but why is it creating a new field instead of assigning the value to testresult? If I use testresult::$1 or testresult::$2 alone it works fine, but when concatenating it creates multiple headers with the value as the header name. Is there an easier way to concatenate fields? E.g. if you have a CSV file with header names, can you not just refer to the header names? (I know how to do this using calculated fields, but I want to do it during indexing.)
Thanks
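For reference, the capture groups the REGEX produces can be checked in plain Python. The sample text below is an assumption about what the transform sees for these field values; whether the comma survives CSV parsing depends on your props.conf:

```python
import re

# Sample concatenated field text as the transform might see it (assumption).
text = "Value1 121244"

m = re.match(r"^(\w+\s+)(\d+)", text)
group1, group2 = m.group(1), m.group(2)  # "Value1 " and "121244"

# The value the FORMAT line intends to emit for testresult.
testresult = group1 + group2
```

If the groups come out right here but Splunk still splits the field, the problem is in how FORMAT is parsed, not in the regex.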

How to get the parent URL when using rules in Scrapy?

rules = (
    Rule(LinkExtractor(
        restrict_xpaths='//need_data',
        deny=deny_urls), callback='parse_info'),
    Rule(LinkExtractor(allow=r'/need/', deny=deny_urls), follow=True),
)
These rules extract the URLs to scrape, right?
In the callback, can I get the URL we arrived from?
For example, the website is needdata.com.
Rule(LinkExtractor(allow=r'/need/', deny=deny_urls), follow=True) extracts URLs like needdata.com/need/1, right?
Rule(LinkExtractor(restrict_xpaths='//need_data', deny=deny_urls), callback='parse_info') extracts URLs from needdata.com/need/1, for example a table of people,
and then parse_info scrapes them. Right?
But inside parse_info, how do I know who the parent is?
If needdata.com/need/1 links to needdata.com/people/1,
I want to add a 'parent' column to my output file, and its value would be needdata.com/need/1.
How can I do that? Thank you very much.
You can use
lx = LinkExtractor(allow=(r'shop-online/',))
and then
for l in lx.extract_links(response):
    # l.url is the extracted URL
and then pass the parent along in the request meta, e.g.
meta={'category': category}
I have not found a better solution.
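A minimal sketch of that meta approach (the names build_meta and parse_info are illustrative, not Scrapy API): when yielding the follow-up request you attach the current page's URL, e.g. scrapy.Request(link.url, callback=self.parse_info, meta={'parent': response.url}), and the callback reads it back from response.meta:

```python
def build_meta(parent_url):
    """Meta dict to attach when yielding the follow-up Request."""
    return {"parent": parent_url}

def parse_info(response):
    """Callback: emit an item that carries the parent column."""
    return {
        "url": response.url,
        "parent": response.meta.get("parent"),
    }
```

The emitted item then has both the scraped page's URL and the page it was discovered from, which is exactly the extra column the question asks for.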