Are there any differences between "\/\/" and "//" in the links in schema markup?

Are there any differences between "\/\/" and "//" in the links in schema markup? For example:
<script type="application/ld+json">
{
  "@context": "http:\/\/schema.org",
  "@type": "ContactPage",
  "url": "https:\/\/example.com\/contact"
}
</script>
or just
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ContactPage",
  "url": "https://example.com/contact"
}
</script>

No. The backslash escapes special characters in JSON strings, so a newline can be written as \n and a literal backslash as \\. The forward slash may optionally be escaped as \/, but it does not have to be, so both forms above are the same.
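A quick way to confirm this is to parse both spellings and compare the results; a minimal Python check:

```python
import json

# "\/" is an optional escape for "/" in JSON strings,
# so both spellings decode to the identical URL.
escaped = json.loads('"https:\\/\\/example.com\\/contact"')
plain = json.loads('"https://example.com/contact"')
print(escaped == plain)  # True
print(escaped)           # https://example.com/contact
```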

Should the schema.org type WebPage show up as a rich result?

I'm having trouble getting my paywalled content indexed by Google. I want to try adding schema to the pages to mark them as paywalled content and see if that fixes the problem.
Therefore I added this piece of code to my pages:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Page title",
  "description": "Page description.",
  "publisher": {
    "@type": "EducationalOrganization",
    "name": "Name Educational Organization"
  },
  "isAccessibleForFree": "False"
}
</script>
I tested this via https://search.google.com/test/rich-results, but the result is "No items detected". Is this expected because I used the WebPage type, or is there something I should do to get this to work?

How to create a contact point in Grafana using the API?

I am trying to create a contact point for PagerDuty in Grafana using the Grafana API.
I tried with the help of these URLs: the Alert provisioning HTTP API, the API call reference, and the YAML reference (converting the YAML example data to JSON).
But I am getting this error:
{"message":"invalid object specification: type should not be an empty string","traceID":"00000000000000000000000000000000"}
My API call is below, with the integration key replaced by a dummy value for security.
curl -X POST --insecure -H "Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -H "Content-Type: application/json" -d '{
  "contactPoints": [
    {
      "orgId": 1,
      "name": "test1",
      "receivers": [
        {
          "uid": "test1",
          "type": "pagerduty",
          "settings": {
            "integrationKey": "XXXXXXXXXXXXXXXX",
            "severity": "critical",
            "class": "ping failure",
            "component": "Grafana",
            "group": "app-stack",
            "summary": "{{ `{{ template \"default.message\" . }}` }}"
          }
        }
      ]
    }
  ],
  "overwrite": false
}' http://XXXXXXXXXXXXXXXX.us-east-2.elb.amazonaws.com/api/v1/provisioning/contact-points
I would recommend enabling the Grafana Swagger UI. You will see the POST /api/v1/provisioning/contact-points model there.
Example:
{
  "disableResolveMessage": false,
  "name": "webhook_1",
  "settings": {},
  "type": "webhook",
  "uid": "my_external_reference"
}
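Following that model, the request body would be a single contact-point object with a non-empty top-level "type", rather than a contactPoints list. A rough Python sketch under that assumption (the host, token, and integration key are placeholders):

```python
import json
import urllib.request

# Placeholders: substitute your Grafana host, API token, and PagerDuty key.
payload = {
    "name": "test1",
    "type": "pagerduty",  # the top-level "type" the error says is empty
    "settings": {
        "integrationKey": "XXXXXXXXXXXXXXXX",
        "severity": "critical",
    },
}
req = urllib.request.Request(
    "http://your-grafana-host/api/v1/provisioning/contact-points",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer XXXXXXXX",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```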

bash / grep: getting multiple matching elements from JSON

In my BitBucket+Bamboo setup, I'm trying to get a list of email addresses of people having access to a particular repository. This is the output from the BitBucket API:
{
  "size": 3,
  "limit": 25,
  "isLastPage": true,
  "values": [
    {
      "user": {
        "name": "name1",
        "emailAddress": "name1.lastname1@domain.com",
        "id": 1,
        "displayName": "Name1 Lastname1",
        "active": true,
        "slug": "name1",
        "type": "NORMAL",
        "links": {
          "self": [
            {
              "href": "https://bitbucket.com/stash/users/name1"
            }
          ]
        }
      },
      "permission": "REPO_WRITE"
    },
    {
      "user": {
        "name": "name2",
        "emailAddress": "name2.lastname2@domain.com",
        "id": 2,
        "displayName": "Name2 Lastname2",
        "active": true,
        "slug": "name2",
        "type": "NORMAL",
        "links": {
          "self": [
            {
              "href": "https://bitbucket.com/stash/users/name2"
            }
          ]
        }
      },
      "permission": "REPO_WRITE"
    },
    {
      "user": {
        "name": "name3",
        "emailAddress": "name3.lastname3@domain.com",
        "id": 3,
        "displayName": "Name3 Lastname3",
        "active": true,
        "slug": "name3",
        "type": "NORMAL",
        "links": {
          "self": [
            {
              "href": "https://bitbucket.com/stash/users/name3"
            }
          ]
        }
      },
      "permission": "REPO_WRITE"
    }
  ],
  "start": 0
}
Is there an easy way to, say, put all 3 email addresses into an array or a comma-separated variable within a bash script? I tried using grep and splitting the API output somehow (e.g. by 'permission'), but no luck so far. Note that I may be forced to use standard tools like grep, sed, or awk; I may not be able to use tools like jq (to process JSON in bash) since I cannot really tamper with the available build agents.
Any help would be much appreciated!
Consider using jq (or another JSON query tool). It will handle any valid JSON, even JSON that is not pretty-printed or formatted in a specific way. It can be combined with readarray to build the array in bash.
readarray -t emails <<< "$(jq -r '.values[].user.emailAddress' < file)"
This will produce an array emails:
declare -p emails
declare -a emails=([0]=$'name1.lastname1@domain.com' [1]=$'name2.lastname2@domain.com' [2]=$'name3.lastname3@domain.com')
Note 2020-07-22: added '-t' to strip trailing newlines from the result array.
Assuming your input is always that regular, this will work using any awk in any shell on every UNIX box:
$ awk -F'"' '$2=="emailAddress"{addrs=addrs sep $4; sep=","} END{print addrs}' file
name1.lastname1@domain.com,name2.lastname2@domain.com,name3.lastname3@domain.com
Save the output in a variable or a file as you see fit, e.g.:
$ var=$(awk -F'"' '$2=="emailAddress"{addrs=addrs sep $4; sep=","} END{print addrs}' file)
$ echo "$var"
name1.lastname1@domain.com,name2.lastname2@domain.com,name3.lastname3@domain.com
Take a look at Python. You can access your API directly like this:
import urllib.request
import json

with urllib.request.urlopen('http://your/api') as url:
    data = json.loads(url.read().decode())
or, as an example, with a local file containing the same data you provided:
import json

with open('./response.json') as f:
    data = json.load(f)

result = {}
for x in data['values']:
    node = x['user']
    result[node['emailAddress']] = x['permission']
result is {'name1.lastname1@domain.com': 'REPO_WRITE', 'name2.lastname2@domain.com': 'REPO_WRITE', 'name3.lastname3@domain.com': 'REPO_WRITE'}
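To get the comma-separated variable the question asks for, the dict keys can be joined; a small self-contained continuation (with the result dict inlined here for illustration):

```python
# Dict iteration yields keys, i.e. the extracted addresses.
result = {
    "name1.lastname1@domain.com": "REPO_WRITE",
    "name2.lastname2@domain.com": "REPO_WRITE",
    "name3.lastname3@domain.com": "REPO_WRITE",
}
emails = ",".join(result)
print(emails)
# name1.lastname1@domain.com,name2.lastname2@domain.com,name3.lastname3@domain.com
```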
$ grep -oP '(?<="emailAddress": ).*' file |
tr -d '",' |
paste -sd,
name1.lastname1@domain.com,name2.lastname2@domain.com,name3.lastname3@domain.com
or
$ grep '"emailAddress":' file |
cut -d: -f2 |
tr -d '", ' |
paste -sd,

Google structured data for Events

I want to show events on Google, so I have created this JSON-LD:
<script type="application/ld+json">
[{
  "@context": "http://schema.org",
  "@type": "TestEvent",
  "name": "Rohit Patil Event",
  "startDate": "2014-10-10T19:30",
  "location": {
    "@type": "Place",
    "name": "Warner Theatre",
    "address": "Washington, DC"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://www.etix.com/ticket/1771656"
  }
}]
</script>
I have tested this in the testing tool and it reports 0 errors.
How can I use this code to work on Google? Which file is the best place for this code?
You add the script element to the page about this event, either in the head or in the body.
It becomes clearer if you use Microdata or RDFa instead of JSON-LD, because there you have to mark up your existing content. With JSON-LD you duplicate the content, but it should still be placed where the actual/visible content is.
You need to embed this into a webpage whose content is about the same event. You can find more information at:
https://developers.google.com/search/docs/guides/intro-structured-data#markup-formats-and-placement
By the way, JSON-LD is the format recommended by Google at this point.
In this case, "the best file" to put that code in is the Rohit Patil Event page. It should go within the head section of that webpage, i.e.:
<html>
  <head>
    ..your json-ld code..
  </head>
</html>

Why does Rebol fail with the Stack Overflow API?

If I type this in a browser (see https://stackapps.com/questions/2/getting-started-with-the-api):
http://api.stackoverflow.com/1.0/stats
it returns
{
  "statistics": [
    {
      "total_questions": 800830,
      "total_unanswered": 131356,
      "total_accepted": 500653,
      "total_answers": 2158752,
      "total_comments": 3125048,
      "total_votes": 7601765,
      "total_badges": 798091,
      "total_users": 289282,
      "questions_per_minute": 1.50,
      "answers_per_minute": 3.12,
      "badges_per_minute": 1.20,
      "views_per_day": 455215.44,
      "api_version": {
        "version": "1.0",
        "revision": "2010.7.17.1"
      },
      "site": {
        "name": "Stack Overflow",
        "logo_url": "http://sstatic.net/stackoverflow/img/logo.png",
        "api_endpoint": "http://api.stackoverflow.com",
        "site_url": "http://stackoverflow.com",
        "description": "Q&A for professional and enthusiast programmers",
        "icon_url": "http://sstatic.net/stackoverflow/apple-touch-icon.png",
        "state": "normal",
        "styling": {
          "link_color": "#0077CC",
          "tag_foreground_color": "#3E6D8E",
          "tag_background_color": "#E0EAF1"
        }
      }
    }
  ]
}
If I type this in the Rebol console:
read http://api.stackoverflow.com/1.0/stats
It returns some weird binary characters.
probe load to-string gunzip to-string read/binary http://api.stackoverflow.com/1.0/stats
connecting to: api.stackoverflow.com
{
  "statistics": [
    {
      "total_questions": 801559,
      "total_unanswered": 131473,
      "total_accepted": 501129,
      "total_answers": 2160171,
      "total_comments": 3127759,
      "total_votes": 7607247,
      "total_badges": 798608,
      "total_users": 289555,
      "questions_per_minute": 0.93,
      "answers_per_minute": 1.83,
      "badges_per_minute": 0.73,
      "views_per_day": 455579.60,
      "api_version": {
        "version": "1.0",
        "revision": "2010.7.17.2"
      },
      "site": {
        "name": "Stack Overflow",
        "logo_url": "http://sstatic.net/stackoverflow/img/logo.png",
        "api_endpoint": "http://api.stackoverflow.com",
        "site_url": "http://stackoverflow.com",
        "description": "Q&A for professional and enthusiast programmers",
        "icon_url": "http://sstatic.net/stackoverflow/apple-touch-icon.png",
        "state": "normal",
        "styling": {
          "link_color": "#0077CC",
          "tag_foreground_color": "#3E6D8E",
          "tag_background_color": "#E0EAF1"
        }
      }
    }
  ]
}
REBOL is ignoring the Content-Encoding: gzip response header, which Stack Overflow seems adamant about using, regardless of what you put in your Accept-Encoding header. On Unix, wget and curl have the same problem, but I can do this to see the intended content:
curl http://api.stackoverflow.com/1.0/stats | zcat
Does REBOL have a way to uncompress gzip content?
Based on http://www.mail-archive.com/rebol-bounce@rebol.com/msg03531.html
>> do http://www.rebol.org/download-a-script.r?script-name=gunzip.r
connecting to: www.rebol.org
Script: "gunzip" (30-Dec-2004)
>> print to-string gunzip read http://api.stackoverflow.com/1.0/stats
connecting to: api.stackoverflow.com
{
"statistics": [
{
"total_questions": 801316,
"total_unanswered": 131450,
"total_accept���������������������accept������E531450,
"tocomment312672�vote7605283badge7984187946531450,
tal_unans_per_minutet.0531450,
....
almost works :)
So the core code is all there, just not exposed properly... it's a pity indeed.
But Stack Overflow is not nice either, not being compliant with the HTTP specs and ignoring the Accept-Encoding header...
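For comparison, the same gunzip-then-parse round trip that the Rebol one-liner attempts is a few lines in most languages; a minimal Python sketch using a simulated response body (the real API would supply the gzip bytes):

```python
import gzip
import json

# Simulate a gzip-encoded HTTP body, then reverse it: decompress
# the bytes and parse the resulting JSON text.
body = gzip.compress(json.dumps({"statistics": [{"total_questions": 800830}]}).encode())
data = json.loads(gzip.decompress(body).decode())
print(data["statistics"][0]["total_questions"])  # 800830
```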