Is there any way to use JSON-LD Schema not inlined - seo

Is there any way to use JSON-LD without including the script inline in the HTML, but still get Google (and other) spiders to find it? Looking around, I've seen some conflicting information.
If this was the JSON-LD file:
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "name": "Example Site",
  "alternateName": "example",
  "description": "Welcome to this WebSite",
  "headline": "Welcome to Website",
  "logo": "https://example.com/public/images/logo.png",
  "url": "https://example.com/"
}
</script>
And I have this in the head of the HTML:
<script src="/public/json-ld.json" type="application/ld+json"></script>
EDIT: I've also tried:
<link href="/public/json-ld.json" rel="alternate" type="application/ld+json" />
Google's spiders seem to miss it, and so does the testing tool, unless I point it directly at the file. I'm trying to work around unsafe-inline in the CSP, and the only thing I can find is this, which would work in Chrome, but I don't want to be firing console errors in every other browser. Plus, I just like the idea of Schema.org data being abstracted out of the page structure. Would adding the JSON-LD file to the sitemap in Google Webmaster Tools help?
Apologies, I'm a total noob to JSON-LD and keep ending up in email documentation (this would be for a site) or old documentation.

True, it cannot be referenced as an external file; it is only supported inline. But you can still achieve what you want by injecting it into the DOM via a JavaScript file.
Note: I am using an array for neatness, so I can segment all the structured data elements and add any number of them. I have a more complicated version of this code on my websites, and actually have an external server-side rendered file masquerading as a JavaScript file.
And yes, Google's search bot does understand it. It may take days for it to register, and using Webmaster Tools to force a re-crawl does not seem to force a refresh of the JSON-LD data; it seems you just have to wait.
var structuredData = {
  schema: {
    corporation: {
      '@context': 'http://schema.org',
      '@type': 'Corporation',
      'name': 'Acme',
      'url': 'https://acme.com',
      'contactPoint': {
        '@type': 'ContactPoint',
        'telephone': '+1-1234-567-890',
        'contactType': 'customer service',
        'areaServed': 'US'
      }
    },
    service: {
      '@context': 'http://schema.org/',
      '@type': 'Service',
      'name': 'Code optimization',
      'serviceOutput': 'Externalized json',
      'description': 'Inline json to externalized json'
    }
  },
  init: function () {
    // gather the structured data segments into a single array
    var g = [];
    var sd = structuredData;
    g.push(sd.schema.corporation);
    g.push(sd.schema.service);
    // etc.
    // build a JSON-LD script element and append it to the document
    var o = document.createElement('script');
    o.type = 'application/ld+json';
    o.innerHTML = JSON.stringify(g);
    var d = document;
    (d.head || d.body).appendChild(o);
  }
};
structuredData.init();
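The JavaScript file doing the injection must itself be allowed by the page's script-src (for example 'self' if it is served from your own origin). Once it has run, you can confirm the injected block from the browser console; a quick sketch using plain DOM APIs:
document.querySelectorAll('script[type="application/ld+json"]').forEach(function (el) {
  // each entry should parse back into the array of schema objects built above
  console.log(JSON.parse(el.innerHTML));
});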


How to create custom meta tags in Vue Meta 3?

I am using Vue Meta 3 to provide metadata to the website. The API for this is here.
I do not understand how to provide custom meta tags (e.g. Open Graph tags such as og:type). This is what I have tried to do in a component:
setup() {
  useMeta({
    title: "Homepage",
    meta: [
      { name: "hunky:dory", content: "website" }
    ],
    description: "This is the homepage."
  })
},
The HTML output ends up like this:
<head>
  <title>Homepage</title>
  <meta name="description" content="This is the homepage.">
  <meta name="meta" content="website"> <!-- should be <meta name="hunky:dory" content="website"> -->
</head>
If I change the code to this:
setup() {
  useMeta({
    title: "Homepage",
    "hunky:dory": [
      { content: "website" }
    ],
    description: "This is the homepage."
  })
},
I get illegal HTML output:
<head>
  <title>Homepage</title>
  <meta name="description" content="This is the homepage.">
  <hunky:dory>website</hunky:dory> <!-- total nonsense -->
</head>
How do I get the output to be:
<head>
  <title>Homepage</title>
  <meta name="description" content="This is the homepage.">
  <meta name="hunky:dory" content="website">
</head>
There are two parts to getting og meta tags working; I think I can help with part 1:
Correct vue-meta syntax
Server-Side Rendering (SSR)
Part 1: vue-meta for Vue 3
I wrote this with vue-class-component, and it seems to be working:
meta = setup(() => useMeta(computed(() => ({
  title: this.event?.name ?? 'Event not found',
  og: {
    image: this.event?.bannerUrl ?? 'http://yourwebsite.com/images/default-banner.png'
  }
}))))
...which presumably translates to this in vanilla Vue 3:
setup(props) {
  // note: `this` is not available inside setup(), so read the event from props (or a ref) instead
  useMeta(
    computed(() => ({
      title: props.event?.name ?? 'Event not found',
      og: {
        image: props.event?.bannerUrl ?? 'http://yourwebsite.com/images/default-banner.png'
      }
    }))
  )
}
Result:
<meta property="og:image" content="http://cloudstorage.com/images/event-123.png">
References:
GitHub -> vue-meta#next -> example for vue-router
Also hinted in the readme
Part 2: SSR
Once I'd done part 1, I realized that I hadn't set up SSR... so I'm only rendering the meta for my users, not for Facebook's crawler (not very useful). I'm afraid I haven't fixed this yet on my project; perhaps someone else can pitch in on that part!
Until then, maybe this will get you started:
SSR options
Vue 3's native SSR
Note on SSR in the vue-meta readme
Note: vue-meta is under the Nuxt GitHub organization => you might consider migrating to Nuxt v3 (which is built on top of Vue):
Nuxt v3 tracker issue
Slides suggesting beta this month (June 2021).
A bit late, but maybe not useless for anyone facing issues with Vue 3 (and vue-meta). With the workaround detailed below, you are not dependent on any third-party lib.
My project is currently in the development stage in a local environment (so this is not fully tested), but a probable workaround is using the beforeCreate lifecycle hook to add meta tags if you are using the Options API in Vue 3 (with vue-router) the SFC way (e.g. if you are using single-file-component views for "pages" and you want each of them to have its own custom meta info).
In the hook you can create DOM nodes and append them to the head, like so:
...
beforeCreate(){
  // adding the title for the current view/page using vue-i18n
  let title = document.createElement(`TITLE`)
  title.innerText = this.$t(`something`)
  document.querySelector(`head`).appendChild(title)
  // adding og:image (note: Open Graph tags are conventionally declared with the
  // `property` attribute rather than `name`, so crawlers may expect that instead)
  let ogImage = document.createElement(`META`)
  ogImage.setAttribute(`name`,`og:image`)
  ogImage.setAttribute(`content`,`YOUR-IMAGE-URL`)
  document.querySelector(`head`).appendChild(ogImage)
}
...
I'm not sure yet if this is an efficient way to make it work, but I'm going to try to update this post when the project is in production.
I have tested this solution with Chrome plugins like this one:
Localhost Open Graph Debugger
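One thing to keep in mind with this approach: because the hook appends new nodes every time a view is created, repeated navigation can leave duplicate tags in the head. A minimal cleanup sketch (assuming you mark the nodes you add with a hypothetical data attribute such as data-view-meta):
beforeUnmount(){
  // remove only the tags this view added, so they don't accumulate across route changes
  document.querySelectorAll(`head [data-view-meta]`).forEach(el => el.remove())
}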
I was having the same issues, then I came across this, which solved the problem for me.
Here is the link to the original post: vue3 vue-meta showing name="meta"
In Vue.js 3 you should use the vue-meta alpha version. Then do the following:
metaInfo() {
  return {
    htmlAttrs: { lang: 'en', amp: true },
    title: "page title",
    description: "Page description",
    twitter: {
      title: "twitter title",
      description: "twitter description",
      card: "twitter card",
      image: "twitter image",
    },
    og: {
      title: 'og title!',
      description: 'og description!',
      type: 'og type',
      url: 'og url',
      image: 'og image',
      site_name: 'og site name',
    }
  }
}
If you want to use plain meta name/content pairs, change the config in main.js:
import { createMetaManager, defaultConfig, plugin as metaPlugin } from 'vue-meta'

const metaManager = createMetaManager(false, {
  ...defaultConfig,
  meta: { tag: 'meta', nameless: true },
});
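The manager presumably also needs to be registered on the app instance for this to take effect; a sketch along the lines of the vue-meta next readme (App and the mount selector are placeholders):
import { createApp } from 'vue'
import App from './App.vue'

const app = createApp(App)
app.use(metaManager) // the manager created above
app.mount('#app')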
and in your component use meta names like this:
metaInfo() {
  return {
    meta: [
      { name: 'author', content: 'author' },
      { name: 'description', content: 'authors' },
    ]
  }
}

How do I get Watch Mode with Sanity.io and Gatsby to refresh content when referenced documents are edited in the CMS / Studio?

I'm using Sanity.io and GatsbyJS 3.x.
Watch mode works great when you update content in the CMS, except for when the content you edit is part of a referenced schema of type 'document'.
Put another way, changes made to a document referenced by another document will not re-render the page despite having watch mode on and configured properly.
For example, here is a snippet from my Page schema.
...
{
  name: "content",
  type: "array",
  title: "Page Sections",
  description: "Add, edit, and reorder sections",
  of: [
    {
      type: 'reference',
      to: [
        { type: 'nav' },
        { type: 'section' },
        { type: 'footer' }
      ]
    }
  ],
},
...
The above schema references a
nav schema
section schema
footer schema
Each of these is of type 'document'.
See the example below.
export default {
  type: 'document',
  name: 'section',
  title: 'Page Sections',
  fields: [
    {
      name: 'meta',
      title: 'Section Meta Data',
      type: 'meta'
    },
    ...
I want to reference a document, rather than an object, because I need the content created from these schemas to be re-used throughout the application.
Finally, I've configured the source plugin correctly for watch mode.
Gatsby Config is set properly
{
  resolve: `gatsby-source-sanity`,
  options: {
    projectId: `asdfasdf`,
    dataset: `template`,
    watchMode: true,
    overlayDrafts: true,
    token: process.env.MY_SANITY_TOKEN,
  },
},
In the CMS / Studio, when you edit one of the fields, you can see Gatsby re-compile in dev mode from the terminal. However, the page does not auto reload and display the changes made to the referenced document.
I've tried reloading the page with the reload button and via hard refresh, the changes do not render.
The only way to render the changes is to go back to the CMS and edit a field on the main “Page” document. Then it refreshes immediately.
Am I doing something wrong? Is this expected behavior? Is there a way to get this to work?
For those who run across this issue, I was able to answer my own question. I hope this saves you the days it took me to find a solution.
Solution TLDR
You need to explicitly query the referenced document in order for watch mode to work properly.
Details with Examples
Summary
The gatsby-source-sanity plugin provides convenience queries that start with _raw for array types. When you use a _raw query in your GraphQL query, it will not trigger watch mode to reload the data. You need to explicitly query the referenced document in order for watch mode to work properly. This may have to do with how the plugin sets up listeners, and I don't know whether this is a bug or a feature.
Example
My Page Document has the following schema
{
  name: "content",
  type: "array",
  title: "Page Sections",
  description: "Add, edit, and reorder sections",
  of: [
    {
      type: "reference",
      to: [
        { type: "nav" },
        { type: 'section' },
      ],
    },
  ],
},
The section is a reference to a section document.
{ type: 'section' }
The reason I'm not using an object is that I want the page sections to be re-usable on multiple pages.
Assuming you have watch mode enabled properly in your gatsby-config.js file, like so...
// gatsby-config.js
{
  resolve: `gatsby-source-sanity`,
  options: {
    projectId: `asdf123sg`,
    dataset: `datasetname`,
    watchMode: true,
    overlayDrafts: true,
    token: process.env.SANITY_TOKEN,
  },
},
Then you should see the following behavior:
Gatsby listens for document/content updates
Gatsby re-runs queries, updates the data, and hot-reloads the page
You'll see the following scroll in your terminal window.
success Re-building development bundle - 1.371s
success building schema - 0.420s
success createPages - 0.020s
info Total nodes: 64, SitePage nodes: 9 (use --verbose for breakdown)
success Checking for changed pages - 0.001s
success update schema - 0.081s
success onPreExtractQueries - 0.006s
success extract queries from components - 0.223s
success write out requires - 0.002s
success run page queries - 0.010s - 1/1 99.82/s
This works great if you are querying the main document or any referenced objects. However, if you are querying any references to another document then there is one gotcha you need to be aware of.
The Gotcha
When you use the _raw query in your GraphQL query, it will not trigger watch mode to reload the data. You need to explicitly query the referenced document in order for watch mode to work properly.
Example: This Query will NOT work
export const PageQuery = graphql`
  fragment PageInfo on SanityPage {
    _id
    _key
    _updatedAt
    _rawContent(resolveReferences: {maxDepth: 10})
  }
`
Example: This query WILL Work
export const PageQuery = graphql`
  fragment PageInfo on SanityPage {
    _id
    _key
    _updatedAt
    _rawContent(resolveReferences: {maxDepth: 10})
    content {
      ... on SanitySection {
        id
      }
    }
  }
`
`
This additional query is the key.
Here is where I am explicitly querying the document that is being referenced in the 'content' array:
content {
  ... on SanitySection {
    id
  }
}
You don't actually need to use the data that results from that query, you simply need to include this in your query.
My guess is that this informs the gatsby-source-sanity plugin to set up a listener, whereas the _rawContent fragment does not.
Not sure if this is a feature, a bug, or just expected behavior. At the time of writing, the versions were as follows:
"gatsby": "3.5.1",
"gatsby-source-sanity": "^7.0.0",

Set common key prefix for S3 bucket per CKFinder 3 instance

How can I make the CKFinder ASP.net S3 integration load content from a dynamic key prefix rather than just a root location?
I'm using CKEditor 5 and CKFinder 3 with the ASP.net Connector to allow image upload directly to an S3 bucket. The web application we are connecting this all to is not an ASP.net application.
Setting it up was simple enough by following the documentation.
However, our product is SaaS, so each time the CKFinder is launched, I need it to target a different key prefix in our bucket. Multiple websites run off the same app and each should be able to have their own gallery of images loaded via the CKFinder without being able to see the images belonging to other apps.
Our CKFinder Web.config:
<backend name="s3Bucket" adapter="s3">
  <option name="bucket" value="myBucket" />
  <option name="key" value="KEYHERE" />
  <option name="secret" value="SECRETHERE" />
  <option name="region" value="us-east-1" />
  <option name="root" value="images" />
</backend>
This config gets content into the /images/ common key prefix "folder" just great, but for each app that uses the CKFinder, I want it to read from a different "root":
/images/app1Id/
/images/app2Id/
/images/app3Id/
Ideally, I want to set this when invoking the Editor/Finder instance; something like:
ClassicEditor.create( document.querySelector( '#textareaId' ), {
  ckfinder: {
    uploadUrl: '/ckfinder/connector?command=QuickUpload&type=Images&responseType=json',
    connectorRoot: '/images/app1Id/'
  },
  toolbar: [ 'heading', '|', 'bold', 'italic', 'link', 'bulletedList', 'numberedList', 'blockQuote', 'ckfinder' ],
  heading: {
    options: [
      { model: 'paragraph', title: 'Paragraph', class: 'ck-heading_paragraph' },
      { model: 'heading1', view: 'h1', title: 'Heading 1', class: 'ck-heading_heading1' },
      { model: 'heading2', view: 'h2', title: 'Heading 2', class: 'ck-heading_heading2' }
    ]
  }
});
Here I added connectorRoot: '/images/app1Id/' as an example of what I would like to pass.
Is there some way to do something like this? I've read through the ASP.net Connector docs and see that you can build your own connector and use pass to send it data, but having to compile and maintain a custom connector does not sound very fun. The S3 connectivity here is so great and easy... if only it let me be a little more specific.
The solution we came to was to modify and customize the CKFinder ASP Connector. Big thanks to the CKSource team for helping us to get this running.
ConnectorConfig.cs
namespace CKSource.CKFinder.Connector.WebApp
{
    using System.Configuration;
    using System.Linq;
    using CKSource.CKFinder.Connector.Config;
    using CKSource.CKFinder.Connector.Core.Acl;
    using CKSource.CKFinder.Connector.Core.Builders;
    using CKSource.CKFinder.Connector.Host.Owin;
    using CKSource.CKFinder.Connector.KeyValue.FileSystem;
    using CKSource.FileSystem.Amazon;
    //using CKSource.FileSystem.Azure;
    //using CKSource.FileSystem.Dropbox;
    //using CKSource.FileSystem.Ftp;
    using CKSource.FileSystem.Local;
    using Owin;

    public class ConnectorConfig
    {
        public static void RegisterFileSystems()
        {
            FileSystemFactory.RegisterFileSystem<LocalStorage>();
            //FileSystemFactory.RegisterFileSystem<DropboxStorage>();
            FileSystemFactory.RegisterFileSystem<AmazonStorage>();
            //FileSystemFactory.RegisterFileSystem<AzureStorage>();
            //FileSystemFactory.RegisterFileSystem<FtpStorage>();
        }

        public static void SetupConnector(IAppBuilder builder)
        {
            var allowedRoleMatcherTemplate = ConfigurationManager.AppSettings["ckfinderAllowedRole"];
            var authenticator = new RoleBasedAuthenticator(allowedRoleMatcherTemplate);
            var connectorFactory = new OwinConnectorFactory();
            var connectorBuilder = new ConnectorBuilder();
            var connector = connectorBuilder
                .LoadConfig()
                .SetAuthenticator(authenticator)
                .SetRequestConfiguration(
                    (request, config) =>
                    {
                        config.LoadConfig();
                        var defaultBackend = config.GetBackend("default");
                        var keyValueStoreProvider = new FileSystemKeyValueStoreProvider(defaultBackend);
                        config.SetKeyValueStoreProvider(keyValueStoreProvider);

                        // Remove dummy resource type
                        config.RemoveResourceType("dummy");

                        var queryParameters = request.QueryParameters;
                        // This code lacks some input validation - make sure the user is allowed to access the passed appId
                        string appId = queryParameters.ContainsKey("appId") ? Enumerable.FirstOrDefault(queryParameters["appId"]) : string.Empty;

                        // set up an array of StringMatchers for folders to hide!
                        StringMatcher[] hideFoldersMatcher = new StringMatcher[] { new StringMatcher(".*"), new StringMatcher("CVS"), new StringMatcher("thumbs"), new StringMatcher("__thumbs") };

                        // image type resource setup
                        var fileSystem_Images = new AmazonStorage(secret: "SECRET-HERE",
                            key: "KEY-HERE",
                            bucket: "BUCKET-HERE",
                            region: "us-east-1",
                            root: string.Format("images/{0}/userimages/", appId),
                            signatureVersion: "4");
                        string[] allowedExtentions_Images = new string[] { "gif", "jpeg", "jpg", "png" };
                        config.AddBackend("s3Images", fileSystem_Images, string.Format("CDNURL-HERE/images/{0}/userimages/", appId), false);
                        config.AddResourceType("Images", resourceBuilder => {
                            resourceBuilder.SetBackend("s3Images", "/")
                                .SetAllowedExtensions(allowedExtentions_Images)
                                .SetHideFoldersMatchers(hideFoldersMatcher)
                                .SetMaxFileSize(5242880);
                        });

                        // file type resource setup
                        var fileSystem_Files = new AmazonStorage(secret: "SECRET-HERE",
                            key: "KEY-HERE",
                            bucket: "BUCKET-HERE",
                            region: "us-east-1",
                            root: string.Format("docs/{0}/userfiles/", appId),
                            signatureVersion: "4");
                        string[] allowedExtentions_Files = new string[] { "csv", "doc", "docx", "gif", "jpeg", "jpg", "ods", "odt", "pdf", "png", "ppt", "pptx", "rtf", "txt", "xls", "xlsx" };
                        config.AddBackend("s3Files", fileSystem_Files, string.Format("CDNURL-HERE/docs/{0}/userfiles/", appId), false);
                        config.AddResourceType("Files", resourceBuilder => {
                            resourceBuilder.SetBackend("s3Files", "/")
                                .SetAllowedExtensions(allowedExtentions_Files)
                                .SetHideFoldersMatchers(hideFoldersMatcher)
                                .SetMaxFileSize(10485760);
                        });
                    })
                .Build(connectorFactory);

            builder.UseConnector(connector);
        }
    }
}
Items of note:
Added using System.Linq; so that FirstOrDefault works when getting the appId
We removed some of the fileSystems (Azure, Dropbox, FTP) because we do not use those in our integration
In the CKFinder web.config file, we create a 'dummy' resource type because the Finder requires at least one to be present; we then remove it during connector config and replace it with our desired resource types: <resourceTypes><resourceType name="dummy" backend="default"></resourceType></resourceTypes>
Please note that you're placing some sensitive information in this file. Consider how you version control this (or not); you may want to take additional steps to make it more secure
Initializing a CKEditor4/CKFinder3 instance
<script src="/js/ckeditor/ckeditor.js"></script>
<script src="/js/ckfinder3/ckfinder.js"></script>
<script type="text/javascript">
var myEditor = CKEDITOR.replace( 'bodyContent', {
  toolbar: 'Default',
  width: '100%',
  startupMode: 'wysiwyg',
  filebrowserBrowseUrl: '/js/ckfinder3/ckfinder.html?type=Files&appId=12345',
  filebrowserUploadUrl: '/js/ckfinder3/connector?command=QuickUpload&type=Files&appId=12345',
  filebrowserImageBrowseUrl: '/js/ckfinder3/ckfinder.html?type=Images&appId=12345',
  filebrowserImageUploadUrl: '/js/ckfinder3/connector?command=QuickUpload&type=Images&appId=12345',
  uploadUrl: '/js/ckfinder3/connector?command=QuickUpload&type=Images&responseType=json&appId=12345'
});
</script>
Items of note:
Due to other integration requirements, we are using the Manual Integration method here, which requires us to manually define our filebrowser URLs
Currently, adding &pass=appId to your filebrowserUrls or adding config.pass = 'appId'; to your config.js file does not properly pass the desired value through to the editor
I believe this only fails when using the Manual Integration method (it should work correctly if you're using CKFinder.setupCKEditor())
ckfinder.html
<!DOCTYPE html>
<!--
Copyright (c) 2007-2019, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.html or https://ckeditor.com/sales/license/ckfinder
-->
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1,user-scalable=no">
  <title>CKFinder 3 - File Browser</title>
</head>
<body>
  <script src="ckfinder.js"></script>
  <script>
    var urlParams = new URLSearchParams( window.location.search );
    var myAppId = ( urlParams.has( 'appId' ) ) ? urlParams.get( 'appId' ) : '';
    if ( myAppId !== '' ) {
      CKFinder.start( { pass: 'appId', appId: myAppId } );
    } else {
      document.write( 'Error loading configuration.' );
    }
  </script>
</body>
</html>
Items of note:
This all seems to work much more smoothly when integrating with CKEditor 5, but when integrating with CKEditor 4 we experienced a lot of issues getting the appId value to pass properly into the editor when utilizing the Manual Integration method for CKFinder
We modify the ckfinder.html file here to look for the desired URL params and pass them into the CKFinder instance as it is started. This ensures they are passed through the entirety of the Finder instance
Check out this question for some great further details about this process as well as a more generic method of passing n params into your Finder instances: How do I pass custom values to CKFinder3 when instantiating a CKEditor4 instance?

How can I use port.postMessage to send info from the background page to the content script in a Google Chrome extension?

I've been able to send data from the background page to the content script, but this is done using sendRequest(). I will need to send data back and forth, so I'm trying to figure out the correct syntax for using port.postMessage from the background page to the content script. I have already read, several times, the Google page on Messaging and I don't seem to get it. I even copied the code directly from the page and tested it with no result. All I'm trying to do for now is send data from the background page to the content script using connect as opposed to sendRequest. The response from the content script I will deal with later, as the code handling that response has been the main thorn. I just want to understand the process one step at a time without the extra complication of sending a response back.
I'm not sure if this contravenes the rules of this board but can someone PLEASE give me an example of some code to do this (background page and content script excerpt, the background page is the sender).
I've asked for assistance several times on this site only to be told to read the documentation or check out sites I've already visited.
If you just want any example of opening a port from the extension to a content script, here's the simplest I can think of. The background just opens a port and sends "Hello tab!" over the port, and the content script sends a message to the background any time you click on the webpage.
I think this is pretty simple, so I don't know why you are so stressed. Just make sure that the content tab is already listening when the background tries to connect (I do this by waiting until the "complete" event).
manifest.json:
{
  "name": "TestExt",
  "version": "0.1",
  "background_page": "background.html",
  "content_scripts": [{
    "matches": ["http://localhost/*"], // same as background.html regexp
    "js": ["injected.js"]
  }],
  "permissions": [
    "tabs" // ability to inject js and listen to onUpdated
  ]
}
background.html:
<script>
var interestingTabs = {};
chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) {
  // same as manifest.json wildcard
  if (changeInfo.url && /http:\/\/localhost(:\d+)?\/(.|$)/.test(changeInfo.url)) {
    interestingTabs[tabId] = true;
  }
  if (changeInfo.status === 'complete' && interestingTabs[tabId]) {
    delete interestingTabs[tabId];
    console.log('Trying to connect to tab ' + tabId);
    var port = chrome.tabs.connect(tabId);
    port.onMessage.addListener(function(m) {
      console.log('received message from tab ' + tabId + ':');
      console.log(m);
    });
    port.postMessage('Hello tab!');
  }
});
</script>
injected.js:
chrome.extension.onConnect.addListener(function(port) {
  console.log('Connected to content script!');
  port.onMessage.addListener(function(m) {
    console.log('Received message:');
    console.log(m);
  });
  document.documentElement.addEventListener('click', function(e) {
    port.postMessage('User clicked on a ' + e.target.tagName);
  }, true);
});
Detailed documentation and easy (the most basic) examples are shown on the documentation page.
Plus, a quick search on Stack Overflow will turn up many similar questions with detailed answers.

Chrome Extensions and loading external Google APIs Uncaught ReferenceError

Right now I'm in the process of creating a Chrome extension. For it, I need to use Google's Calendar Data API. Here is my manifest.json file:
{
  "name": "Test",
  "version": "1.0",
  "background_page": "background.html",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["jquery.js", "content_script.js"]
    }
  ],
  "permissions": [
    "tabs", "http://*/*"
  ]
}
I've tried adding the following to the js part of the manifest file but that throws an error when loading the extension.
http://www.google.com/jsapi?key=keyhere
I've also tried adding
document.write('<script type="text/javascript" src="http://www.google.com/jsapi?key=keyhere"></script>');
to my background.html file. However, whenever I call
google.load("gdata", "1");
I get an error that says Uncaught ReferenceError: google is not defined. Why is my extension not loading this API when it loads the other ones fine?
You can't include an external script in content_scripts.
If you want to inject a <script> tag using document.write, then you need to escape the slash in the closing tag:
document.write('...<\/script>');
You can include your external API JS in the background page as usual, though:
<script type="text/javascript" src="http://www.google.com/jsapi?key=keyhere"></script>
If you need this API in content scripts, you can send a request to your background page, ask it to do the API-dependent work there, and then send the result back to your content script.
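A rough sketch of that round trip, using the same legacy sendRequest/onRequest messaging this thread already relies on (the 'loadCalendar' action name is just an illustration):
// content_script.js
chrome.extension.sendRequest({ action: 'loadCalendar' }, function(response) {
  // use whatever the background page fetched via the Google API
  console.log(response.result);
});

// background.html
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
  if (request.action === 'loadCalendar') {
    // call google.load / the Calendar API here, then hand the data back
    sendResponse({ result: 'calendar data goes here' });
  }
});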
Thanks for that link, it helped a lot. However, now I've run into another interesting problem.
for (var i = rangeArray.length - 1; i >= 0; i--) {
  var myLink = document.createElement('a');
  myLink.setAttribute('onclick', 'helloThere();');
  myLink.innerText = "GO";
  rangeArray[i].insertNode(myLink);
}
Now I get the error, "helloThere is not defined" even though I placed that function about ten lines above the current function that has the above loop in the same file. Why might this be happening? And if I do:
for (var i = rangeArray.length - 1; i >= 0; i--) {
  var myLink = document.createElement('a');
  myLink.setAttribute('onclick', 'chrome.extension.sendRequest({greeting: "hello"}, function(response) { });');
  myLink.innerText = "GO";
  rangeArray[i].insertNode(myLink);
}
I get Uncaught TypeError: Cannot call method 'sendRequest' of undefined
This is because there is some syntax error in your code. I had the same problem. I opened my background.html page in Firefox with the Firebug plug-in enabled. The Firebug console showed me the error; I fixed it and it is working now.