Use internationalization in my manifest.json of search provider webextension - api

I use internationalization in the manifest.json of my search provider WebExtension for Firefox. I have not installed a language pack, and I have set intl.locale.requested to "de". My issue is that some text shows up in the "de" language and some shows up in the default English, and I can't figure out the reason for this.
Below is my manifest.json.
Installing the de language pack in Firefox fixes everything, but I want to know the cause of the issue.
manifest.json
{
  "manifest_version": 2,
  "name": "__MSG_extensionName__",
  "description": "__MSG_extensionDescription__",
  "version": "1.1.3",
  "applications": {
    "gecko": {
      "strict_min_version": "57.0"
    }
  },
  "icons": {
    "64": "icons/my-icon.png"
  },
  "permissions": [
    "activeTab"
  ],
  "chrome_settings_overrides": {
    "search_provider": {
      "name": "__MSG_searchEngineName__",
      "search_url": "https://www.example.com/do/dsearch?query={searchTerms}&language=__MSG_extensionUrlLanguage__",
      "favicon_url": "https://www.example.com/favicon.ico",
      "is_default": true
    }
  },
  "background": {
    "scripts": ["js/background.js"]
  },
  "content_scripts": [
    {
      "matches": ["https://*.example.com/*"],
      "css": ["css/content.css"],
      "run_at": "document_start"
    },
    {
      "matches": ["https://*.mydomain.com/*"],
      "js": ["js/content.js", "js/success.min.js"]
    }
  ],
  "default_locale": "en"
}
messages.json for the de locale
{
  "extensionName": {
    "message": "example.com — Datenschutz-Suchmaschine",
    "description": "Name of the extension."
  },
  "extensionDescription": {
    "message": "Hol dir deine Online-Privatsphäre zurück, mach domain.com zu deiner Suchmaschine.",
    "description": "Description of the extension."
  },
  "extensionUrlLanguage": {
    "message": "deutsch",
    "description": "Search Engine Language"
  },
  "searchEngineName": {
    "message": "example.com - Deutsch",
    "description": "Search Engine Name"
  }
}
Expected results: in the add-ons list the extension shows up with its text in German, and searches are performed with the URL https://www.example.com/do/dsearch?query={searchTerms}&language=deutsch
Actual results: in the add-ons list the extension shows up with its text in German, but searches are performed with the URL https://www.example.com/do/dsearch?query={searchTerms}&language=english
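For reference, the __MSG_*__ placeholders in manifest.json are resolved from _locales/<locale>/messages.json, and fall back to the default_locale ("en") file whenever a key, or the whole locale, cannot be resolved. A minimal background-script sketch for checking what Firefox actually resolves, assuming the German file above is installed at _locales/de/messages.json:

// Debugging sketch: log the resolved UI locale and the value the i18n system
// returns for each message key used in manifest.json.
async function logResolvedMessages() {
  console.log("UI language:", browser.i18n.getUILanguage());
  console.log("Accept languages:", await browser.i18n.getAcceptLanguages());
  for (const key of ["extensionName", "extensionDescription", "searchEngineName", "extensionUrlLanguage"]) {
    // getMessage() falls back to the default_locale file when the key cannot be
    // resolved for the active locale, which is how strings end up in English.
    console.log(key, "=>", browser.i18n.getMessage(key));
  }
}
logResolvedMessages();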

Related

Change syntax highlighting of embedded code based on a previous line's keyword?

I'm trying to write a TextMate grammar for a VS Code language extension. Take the following example
(lang=css attribute2=something-else)
"""
.css-class {
background: gray;
}
"""
The (...) part is an "attributes" section, and the """ ... """ is a code section. I'm trying to highlight everything in the code section according to the lang attribute.
The problem is, they are two distinct sections where one might be present without the other in other parts of the file. For example, you can have attributes without the code block.
In the grammars section of package.json I have
"embeddedLanguages": {
"meta.embedded.block.css": "css",
"meta.embedded.block.javascript": "javascript"
}
In the tmLanguage.json file I have both patterns in the repository property.
"attributes": {
"begin": "\\(",
"end": "\\)",
"captures": {
"0": {
"name": "punctuation.definition.annotation punctuation.section.group punctuation.section.parens"
}
},
"patterns": [
{
"begin": "[a-zA-Z_][a-zA-Z0-9_\\.-]*",
"beginCaptures": {
"0": {
"name": "entity.other.attribute-name"
}
},
"end": "(?=\\s*+[^=\\s])",
"patterns": [
{
"begin": "=",
"beginCaptures": {
"0": {
"name": "punctuation.separator.key-value"
}
},
"end": "(?<=[^\\s=])(?!\\s*=)|(?=/?>)",
"patterns": [
{
"match": "([^0-9-.\\s='\"][^\\s='\")]*)",
"name": "string.unquoted.html"
},
{
"match": "=",
"name": "invalid.illegal.unexpected-equals-sign"
},
{
"include": "#strings"
},
{
"include": "#number"
}
]
}
]
}
]
},
"fenced-code": {
"begin": "\\(.*lang=(css|javascript).*\\)\\s*(\"\"\")",
"beginCaptures": {
"1": {
"name": "string.quoted.triple"
}
},
"end": "\"\"\"",
"endCaptures": {
"0": {
"name": "string.quoted.triple"
}
},
"contentName": "meta.embedded.block.$1",
"patterns": [
{
"include": "source.css"
}
]
}
I have a third pattern, not shown, where I use these together by including them in its patterns array. They seem to be mutually exclusive, though. I can have the attributes, and I can have a code block if I start the pattern at """, but if I start the code pattern with \\(.*lang=(css|javascript).*\\)\\s*(\"\"\") to capture the lang attribute, the attributes stop getting highlighted.
Is this even possible? I've never worked with TextMate grammars outside of a VS Code theme and VS Code doesn't seem to have deep documentation on the more "advanced" (I guess) things like this.
I tried using VS Code's HTML grammar for its uses of embedded code, but I don't think the HTML one needs to swap syntax based on something in a previous line.
Update
The fenced-code pattern below keeps the attributes pattern's highlighting while also giving the code block a dynamic contentName property, i.e. "meta.embedded.block.$2" becomes meta.embedded.block.css when "css" is found in the attributes.
"fenced-code": {
"begin": "(\\(.*lang=(css|javascript).*\\))\\s*(\"\"\")",
"beginCaptures": {
"1": {
"patterns": [
{
"include": "#attributes"
}
]
},
"3": {
"name": "string.quoted.triple"
}
},
"end": "\"\"\"",
"endCaptures": {
"0": {
"name": "string.quoted.triple"
}
},
"contentName": "meta.embedded.block.$2",
"patterns": [
{
"include": "source.css"
},
{
"include": "source.js"
}
]
}
However, two things are still wrong so far:
1. It only works when the opening """ is on the same line as the attributes section.
2. fenced-code's patterns array doesn't seem to allow a dynamic include, i.e. "patterns": [{"include": "source.$2"}]. I'm not sure if including the different languages as I did above will work; see the sketch below.
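One workaround I could try for the second point, since capture references seem to work in contentName but apparently not in include, is to duplicate the rule per language with a fixed include. An untested sketch (the rule names are arbitrary):

"fenced-code-css": {
  "begin": "(\\(.*lang=(css).*\\))\\s*(\"\"\")",
  "beginCaptures": {
    "1": { "patterns": [{ "include": "#attributes" }] },
    "3": { "name": "string.quoted.triple" }
  },
  "end": "\"\"\"",
  "endCaptures": { "0": { "name": "string.quoted.triple" } },
  "contentName": "meta.embedded.block.css",
  "patterns": [{ "include": "source.css" }]
},
"fenced-code-javascript": {
  "begin": "(\\(.*lang=(javascript).*\\))\\s*(\"\"\")",
  "beginCaptures": {
    "1": { "patterns": [{ "include": "#attributes" }] },
    "3": { "name": "string.quoted.triple" }
  },
  "end": "\"\"\"",
  "endCaptures": { "0": { "name": "string.quoted.triple" } },
  "contentName": "meta.embedded.block.javascript",
  "patterns": [{ "include": "source.js" }]
}

The third pattern would then include both #fenced-code-css and #fenced-code-javascript instead of a single #fenced-code.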

How to display data in a table using Keystone 6 components in a custom page so it looks the same?

I have built a custom page in Keystone 6 using these docs.
Now I am getting data from a GraphQL query:
{
  "members": [
    {
      "__typename": "Member",
      "id": "ckwluj7jd1675l4l8e9yfcwtt",
      "name": "User 2",
      "companyName": "company1"
    },
    {
      "__typename": "Member",
      "id": "ckwltsw620162l4l88g8ox4zo",
      "name": "User 1",
      "companyName": "company2"
    },
    {
      "__typename": "Member",
      "id": "ckwm061f8436554l8ab4ic3dsd5o",
      "name": "User 3",
      "companyName": ""
    }
  ]
}
Now I am trying to display it on the custom page, but I am not sure how to use the Keystone 6 admin components to display the data.
Ronald here from the Keystone Team.
At the moment there’s no way to do this using official Keystone components. We’re working on a next-gen Admin UI in 2022 that will make it easier for you to achieve this.
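In the meantime, one option is to render the query result in a custom page with a plain HTML table rather than an official Keystone table component. A minimal sketch, assuming the members query above; the import paths ('@keystone-6/core/admin-ui/components' and '@keystone-6/core/admin-ui/apollo') follow the custom-pages docs and are worth double-checking against your Keystone version:

import { PageContainer } from '@keystone-6/core/admin-ui/components';
import { gql, useQuery } from '@keystone-6/core/admin-ui/apollo';

const MEMBERS_QUERY = gql`
  query {
    members {
      id
      name
      companyName
    }
  }
`;

export default function MembersPage() {
  const { data, error, loading } = useQuery(MEMBERS_QUERY);
  if (loading) return <PageContainer header="Members">Loading...</PageContainer>;
  if (error) return <PageContainer header="Members">Error: {error.message}</PageContainer>;
  return (
    <PageContainer header="Members">
      {/* Plain HTML table; styling it to match the Admin UI is up to you. */}
      <table>
        <thead>
          <tr><th>Name</th><th>Company</th></tr>
        </thead>
        <tbody>
          {data.members.map(member => (
            <tr key={member.id}>
              <td>{member.name}</td>
              <td>{member.companyName}</td>
            </tr>
          ))}
        </tbody>
      </table>
    </PageContainer>
  );
}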

Retrieve Wikimedia Commons Category of a Wikipedia page using the Wikimedia API

I am trying to use the Wikimedia API to get the Wikimedia Commons category corresponding to a specific Wikipedia page. I assume that this is possible, as most Wikipedia pages include an "In other projects" section in the sidebar with a link that redirects to the Commons category (for example: https://de.wikipedia.org/wiki/Albert_Einstein).
Thanks in advance.
You can do it in two API calls, the first call to German Wikipedia gets you the Wikidata Qid:
https://de.wikipedia.org/w/api.php?action=query&format=json&prop=wbentityusage&titles=Albert%20Einstein&wbeuprop=&wbeuaspect=
Which returns:
{
  "batchcomplete": "",
  "query": {
    "pages": {
      "1278360": {
        "pageid": 1278360,
        "ns": 0,
        "title": "Albert Einstein",
        "wbentityusage": {
          "Q937": {
            "aspects": [
              "S",
              "T",
              "C.P227",
              "C.P214",
              "C.P244"
            ]
          }
        }
      }
    }
  }
}
Then you can use the Wikidata API to get the name of the Commons category: https://www.wikidata.org/w/api.php?action=wbgetclaims&format=json&entity=Q937&property=P373
Which returns:
{
  "claims": {
    "P373": [
      {
        "mainsnak": {
          "snaktype": "value",
          "property": "P373",
          "hash": "be154a8a3dfc826844ceb5a62389857a65ff1e4e",
          "datavalue": {
            "value": "Albert Einstein",
            "type": "string"
          },
          "datatype": "string"
        },
        "type": "statement",
        "id": "q937$2F332903-133D-4CA0-AD24-8B4292C2BF89",
        "rank": "normal"
      }
    ]
  }
}
The value in datavalue is the name of the category. You get the full URL by just prepending https://commons.wikimedia.org/wiki/Category:
https://commons.wikimedia.org/wiki/Category:Albert Einstein
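If it helps, the two calls can be chained in a few lines of JavaScript. A rough sketch (error handling omitted; it simply takes the first Qid it finds in wbentityusage, and from a browser you may need to add &origin=* to both URLs for CORS):

// Resolve a German Wikipedia title to its Commons category URL by chaining
// the two API calls above.
async function commonsCategoryFor(title) {
  const wpUrl = "https://de.wikipedia.org/w/api.php?action=query&format=json" +
    "&prop=wbentityusage&titles=" + encodeURIComponent(title);
  const wpData = await (await fetch(wpUrl)).json();
  const page = Object.values(wpData.query.pages)[0];
  const qid = Object.keys(page.wbentityusage)[0]; // e.g. "Q937"

  const wdUrl = "https://www.wikidata.org/w/api.php?action=wbgetclaims&format=json" +
    "&entity=" + qid + "&property=P373";
  const wdData = await (await fetch(wdUrl)).json();
  const category = wdData.claims.P373[0].mainsnak.datavalue.value;
  return "https://commons.wikimedia.org/wiki/Category:" + category;
}

commonsCategoryFor("Albert Einstein").then(console.log);
// https://commons.wikimedia.org/wiki/Category:Albert Einstein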

Importing data to Contentful programmatically from a JSON file

I am trying to import some data programmatically into Contentful:
I am following the docs here
And I am running the following command inside my integrated terminal:
contentful space import --config config.json
Where the config file is
{
  "spaceId": "abc123",
  "managementToken": "112323132321adfWWExample",
  "contentFile": "./dataToImport.json"
}
And the dataToImport.json file is
{
  "data": [
    {
      "address": "11234 New York City"
    },
    {
      "address": "1212 New York City"
    }
  ]
}
The thing is, I don't understand what format my dataToImport.json should have, and what is missing in this file or in my config file, so that the array of addresses from the JSON file gets added as new entries to an already created content model inside the Contentful UI, shown in the screenshot below.
I am not specifying the content model for the data to go into, so I believe that is one issue, and I don't know how to do that. An example or repo would help me out greatly.
The types of data you can import are listed in their documentation.
Your JSON top level should say "entries" and not "data" if new content of a content type is what you would like to import.
This is an example of a blog post as per the content model of the tutorial they provide.
The only thing I haven't worked out yet is where the user id goes :D so I substituted a link to the content type 'person', also provided in their tutorial (I think it's called the Gatsby Starter).
{"entries": [
{
"sys": {
"space": {
"sys": {
"type": "Link",
"linkType": "Space",
"id": "theSpaceIdToReceiveYourImport"
}
},
"type": "Entry",
"createdAt": "2019-04-17T00:56:24.722Z",
"updatedAt": "2019-04-27T09:11:56.769Z",
"environment": {
"sys": {
"id": "master",
"type": "Link",
"linkType": "Environment"
}
},
"publishedVersion": 149, -- these are not compulsory, you can skip
"publishedAt": "2019-04-27T09:11:56.769Z", -- you can skip
"firstPublishedAt": "2019-04-17T00:56:28.525Z", -- you can skip
"publishedCounter": 3, -- you can skip
"version": 150,
"publishedBy": { -- this is an example of a linked content
"sys": {
"type": "Link",
"linkType": "person",
"id": "personId"
}
},
"contentType": {
"sys": {
"type": "Link",
"linkType": "ContentType",
"id": "blogPost" -- here should be your content type 'RealtorProperties'
}
}
},
"fields": { -- here should go your content type fields, i can't see it in your post
"title": {
"en-US": "Test 1"
},
"slug": {
"en-US": "Test-1"
},
"description": {
"en-US": "some description"
},
"body": {
"en-US": "some body..."
},
"publishDate": {
"en-US": "2016-12-19"
},
"heroImage": { -- another example of a linked content
"en-US": {
"sys": {
"type": "Link",
"linkType": "Asset",
"id": "idOfTHisImage"
}
}
}
}
},
--another entry, ...]}
Have a look at this repo. I am also trying to figure this out. It looks like there are quite a few fields that need to be included in the JSON file. I was hoping there'd be a simple solution, but it seems you (me too, actually) will need to create scripts to "convert" your JSON file into data Contentful can read and import.
I'll let you know if I find anything better.
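As a starting point, here is a small Node sketch that turns the addresses in dataToImport.json into the "entries" shape shown above. The content type id ("realtorProperties") and the field id ("address") are placeholders; swap in whatever your content model actually uses, and note I haven't verified whether the importer accepts entries without an explicit sys.id:

// Convert the simple { "data": [...] } file into a contentful-import style
// { "entries": [...] } file. Content type and field ids are placeholders.
const fs = require('fs');

const input = JSON.parse(fs.readFileSync('./dataToImport.json', 'utf8'));

const entries = input.data.map(item => ({
  sys: {
    type: 'Entry',
    contentType: {
      sys: { type: 'Link', linkType: 'ContentType', id: 'realtorProperties' }
    }
  },
  fields: {
    // Every field value is keyed by locale.
    address: { 'en-US': item.address }
  }
}));

fs.writeFileSync('./entriesToImport.json', JSON.stringify({ entries }, null, 2));

Then point contentFile in config.json at entriesToImport.json and run the import again.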

Finding similar documents with Elasticsearch

I'm using ElasticSearch to develop a service that will store uploaded files or web pages as attachments (the file is one field in the document). This part works fine, as I can search these files using like_text as input. However, the second part of this service should compare a file that has just been uploaded with the existing files in order to find duplicates or very similar files, so that it doesn't recommend the same files or the same web pages to users. The problem is that I can't get the expected results for documents that are the same. The similarity between identical files varies, but is never more than 0.4. Even worse, sometimes I get better scores for files which are not the same than for two exactly identical files. The Java code given below always gives me the same set of documents in the same order, regardless of the input. It looks like the like_text extracted from the uploaded file is always the same.
String mapping = copyToStringFromClasspath("/org/prosolo/services/indexing/documents-mapping.json");
byte[] txt = org.elasticsearch.common.io.Streams.copyToByteArray(file);
Client client = ElasticSearchFactory.getClient();
client.admin().indices().putMapping(putMappingRequest(indexName).type(indexType).source(mapping)).actionGet();
IndexResponse iResponse = client.index(indexRequest(indexName).type(indexType)
    .source(jsonBuilder()
        .startObject()
        .field("file", txt)
        .field("title", title)
        .field("visibility", visibilityType.name().toLowerCase())
        .field("ownerId", ownerId)
        .field("description", description)
        .field("contentType", DocumentType.DOCUMENT.name().toLowerCase())
        .field("dateCreated", dateCreated)
        .field("url", link)
        .field("relatedToType", relatedToType)
        .field("relatedToId", relatedToId)
        .endObject()))
    .actionGet();
client.admin().indices().refresh(refreshRequest()).actionGet();

MoreLikeThisRequestBuilder mltRequestBuilder = new MoreLikeThisRequestBuilder(client, ESIndexNames.INDEX_DOCUMENTS, ESIndexTypes.DOCUMENT, iResponse.getId());
mltRequestBuilder.setField("file");
SearchResponse response = client.moreLikeThis(mltRequestBuilder.request()).actionGet();
SearchHits searchHits = response.getHits();
System.out.println("getTotalHits:" + searchHits.getTotalHits());
Iterator<SearchHit> hitsIter = searchHits.iterator();
while (hitsIter.hasNext()) {
    SearchHit searchHit = hitsIter.next();
    System.out.println("FOUND DOCUMENT:" + searchHit.getId() + " title:" + searchHit.getSource().get("title") + " score:" + searchHit.score());
}
And the query from the browser, which looks like:
http://localhost:9200/documents/document/m2HZM3hXS1KFHOwvGY1pVQ/_mlt?mlt_fields=file&min_doc_freq=1
gives me these results:
{"took":120,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},
"hits":{"total":4,
"max_score":0.41059873,
"hits":
[{"_index":"documents","_type":"document",
"_id":"gIe6NDEWRXWTMi4kMPRbiQ",
"_score":0.41059873,
"_source" :
{"file":"PCFET0NUWVBFIGh..._skiping_the_file_content_here...",
"title":"Univariate Analysis",
"visibility":"public",
"description":"Univariate Analysis Simple Tools for Description ",
"contentType":"webpage",
"dateCreated":"null",
"url":"http://www.slideshare.net/christineshearer/univariate-analysis"}}
This is exactly the same web page, so I'm expecting the score to be 1.0, not 0.41, as there is no difference between the two documents except the _id. The results are even worse with files.
The mapping I was using is:
{
  "document": {
    "properties": {
      "title": {
        "type": "string",
        "store": true
      },
      "description": {
        "type": "string",
        "store": "yes"
      },
      "contentType": {
        "type": "string",
        "store": "yes"
      },
      "dateCreated": {
        "store": "yes",
        "type": "date"
      },
      "url": {
        "store": "yes",
        "type": "string"
      },
      "visibility": {
        "store": "yes",
        "type": "string"
      },
      "ownerId": {
        "type": "long",
        "store": "yes"
      },
      "relatedToType": {
        "type": "string",
        "store": "yes"
      },
      "relatedToId": {
        "type": "long",
        "store": "yes"
      },
      "file": {
        "path": "full",
        "type": "attachment",
        "fields": {
          "author": {
            "type": "string"
          },
          "title": {
            "store": true,
            "type": "string"
          },
          "keywords": {
            "type": "string"
          },
          "file": {
            "store": true,
            "term_vector": "with_positions_offsets",
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "content_length": {
            "type": "integer"
          },
          "date": {
            "format": "dateOptionalTime",
            "type": "date"
          },
          "content_type": {
            "type": "string"
          }
        }
      }
    }
  }
}
Does anyone have an idea what could be wrong here?
Thanks