Does CSS validity affect SEO? [closed] - seo

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Many front-end developers know that valid HTML has a positive effect on SEO. But does valid CSS affect SEO?
On Google's support page https://support.google.com/webmasters/answer/100782?hl=en it says: "Clean, valid HTML is a good insurance policy, and using CSS separates presentation from content, and can help pages render and load faster. Validation tools, such as the free online HTML and CSS validators provided by the W3 Consortium, are useful for checking your site, and tools such as HTML Tidy can help you quickly and easily clean up your code."
But if you look at the world’s most popular front-end component library, for example, you can see that even it contains a lot of errors and warnings from the W3C validator.
:root{
--blue:#007bff;
--indigo:#6610f2;
--purple:#6f42c1;
--pink:#e83e8c;
--red:#dc3545;
--orange:#fd7e14;
--yellow:#ffc107;
--green:#28a745;
--teal:#20c997;
--cyan:#17a2b8;
--white:#fff;
--gray:#6c757d;
--gray-dark:#343a40;
--primary:#007bff;
--secondary:#6c757d;
--success:#28a745;
--info:#17a2b8;
--warning:#ffc107;
--danger:#dc3545;
--light:#f8f9fa;
--dark:#343a40;
--breakpoint-xs:0;
--breakpoint-sm:576px;
--breakpoint-md:768px;
--breakpoint-lg:992px;
--breakpoint-xl:1200px;
--font-family-sans-serif: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
--font-family-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace
}
... This is just a small part of the file bootstrap.min.css.
The validator https://jigsaw.w3.org/css-validator/ reports 30 errors and 670 warnings for Bootstrap. Yet this CSS is used on many successful sites. Would they be even more successful in SEO if they used valid CSS?

In their general guidelines for browser compatibility, Google mentions the W3C validation tools for writing good, clean code.
They also state the following:
Although we do recommend using valid HTML, it's not likely to be a factor in how Google crawls and indexes your site.
So we can assume the same is true for CSS.
Keep in mind that there are a lot of other things in your CSS that will affect search ranking, like mobile-friendly styles, and factors that cause slower loading times, such as selector specificity, file size, and imports.


Why isn't Cucumber considered as a testing tool? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 10 months ago.
I'm new to Cucumber, and I'm trying to understand the tool. While reading the documentation, I found that it is briefly defined as "a tool that supports BDD":
Cucumber is a tool that supports Behaviour-Driven Development(BDD).
Also it is described as a "validation tool":
Cucumber reads executable specifications written in plain text and validates that the software does what those specifications say.
On the other hand, I noticed the heavy use of the word "test" in the 10-minute tutorial.
AFAIK, what this tool does is agile testing, since it is used massively in e2e testing (test basis = Gherkin feature specs + step definitions). However, the blog says something different:
Finally, remember that Cucumber is not a testing tool. It is a tool for capturing common understanding on how a system should work. A tool that allows you, but doesn't require you, to automate the verification of the behaviour of your system if you find it useful.
Now if this tool is not really about testing, then what use is it intended for?
TL;DR
Cucumber is a BDD framework. Its main components are:
Gherkin: a ubiquitous language, used as a communication tool more than anything, and a springboard for collaboration. It helps manage the expectations of the business: if everyone can see the changes you are making in a digestible format, they'll hopefully get less frustrated with the development teams. You can also use it to speed up your team's reaction time when there are bugs, by writing the tests that Cucumber wraps with the mindset that someone will have to come back and debug them at some point.
CLI implementation (or the CLI): a test runner based on Gherkin, developed by volunteers who donate part of their spare time. Every implementation is specific to a programming language, supporting production code that is ready for test. It is the concrete tool / utility.
The Long Version
The intended use of Gherkin is as a communication tool, describing interactions with a system from multiple perspectives (or actors); it just so happens that you can integrate it with test frameworks, which helps ensure that the system in place correctly handles those interactions.
Most commonly, this is from a user's perspective:
Given John has logged in
When he receives a message from Brenda
Then he should be able to click through to his message thread with Brenda via the notification
But it can also be from a component's or page's perspective:
Given the customer history table is displayed
And there have been changes to the customer table for that user since the page was first loaded
When it receives a click to refresh the data
Then the new changes should be displayed
It's all about describing the behaviours and allowing the business and developers to collaborate freely, while breaking down the language barriers that usually end up plaguing communication and making both sides frustrated at each other through a lack of mutual understanding of an issue.
This is where the "fun" begins - Anakin, Ep III
You could use these files to create an environment of "living documentation" throughout your development team (and, if successful, the wider business). In theory, worded and displayed correctly, it would be an incredible boon for customer service workers, who would find it easier to keep up with changes and would have extremely well described help documentation, without any real additional effort. This isn't something I've seen much in the wild, though.
I've written a script at work that does this by converting the features into markdown. Alongside various other markdown tools (mermaid for graphs, tsdoc-plugin-markdown to generate the API docs, and various extensions for my chosen HTML converter, docsify), I've managed to generate something that isn't hard to navigate, and it has opened up communication between teams that previously found it harder to bring their issues to the dev team. Most people know a little markdown these days, even if it has to be described as "the characters you type in Reddit threads and YouTube comments to make text bold and italics" for people to get what it is, but it means everyone can contribute.
It is an extremely useful tool when it comes to debugging tests, especially when used with the screenplay pattern (less so with the standard page object model, which lacks the additional context the screenplay pattern provides, but it's still useful), as everything is described in a way that makes it easy to replicate an issue from a user's or component's perspective if a test fails.
I've paired it with flow charts, where I draw out the user interactions, pinning the features to it and being able to see in a more visual way where users will be able to do something that we might not have planned for, or even figure out some garish scenario that we somehow missed.
The Long Version Longer
My examples here will mostly be in JavaScript, as we've been developing in a Node environment, but if you wanted to create your own versions, they shouldn't be too different.
The Docs
Essentially, this bit is just for displaying the feature files in a way that is easily digestible by the business (I have plans to integrate test reports into this too, and to add the ability to switch branches and such).
First, you want to get a simple array of all of the files in your features folder, and pick out the ones with ".feature" on the end.
Essentially, you just need to flatten an ls here (this can be improved, but we have a requirement to use the LTS version of node, rather than the latest version in general)
const fs = require('fs');
const path = require('path');

// Recursively collect every file path under a directory
const walkSync = (d) => fs.statSync(d).isDirectory()
  ? fs.readdirSync(d).map(f => walkSync(path.join(d, f)))
  : d;

// Flatten the nested arrays that walkSync produces
const flatten = (arr, result = []) => {
  if (!Array.isArray(arr)) {
    return [...result, arr];
  }
  arr.forEach((a) => {
    result = flatten(a, result);
  });
  return result;
};

// Return every *.feature file under the given folder
function features (folder) {
  const allFiles = flatten(walkSync(path.relative(process.cwd(), folder)));
  const convertible = [];
  for (const file of allFiles) {
    if (file.match(/\.feature$/)) {
      convertible.push(file);
    }
  }
  return convertible;
}
...
Going through all of those files with a Gherkin parser to pull out your scenarios requires some set up, although it's pretty simple to do, as Gherkin has an extremely well defined structure and known keywords.
There can be a lot of self referencing, as when you boil it down to the basics, a lot of cucumber is built on well defined components. For example, you could describe a scenario as a background that can have a description, tags and a name:
class Convert {
  ...
  static background (background) {
    return {
      cuke: `${background.keyword.trim()}:`,
      steps: this.steps(background.steps),
      location: this.location(background.location),
      keyword: background.keyword
    }
  }

  static scenario (scenario) {
    return {
      ...this.background(scenario),
      tags: this.tags(scenario.tags),
      cuke: `${scenario.keyword.trim()}: ${scenario.name}\n`,
      description: `${scenario.description.replace(/(?!^\s+[>].*$)(^.*$)/gm, "$1<br>").trim()}`,
      examples: this.examples(scenario.examples)
    }
  }
  ...
}
You can flesh it out fully to write everything to a single file, or to output a few markdown files (making sure to reference them in a menu file).
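As a sketch of that last step, here is one way to render a parsed scenario as a markdown section. It uses the object shape produced by the Convert class above, which is this answer's own convention, not a library API, and the field names are assumptions from that snippet:

```javascript
// Sketch only: turn a parsed scenario object (the shape the Convert
// class above produces -- our own convention, not a library API)
// into one markdown section.
function scenarioToMarkdown (scenario) {
  const tags = (scenario.tags && scenario.tags.length)
    ? `*${scenario.tags.join(' ')}*\n\n`
    : ''
  const steps = (scenario.steps || [])
    .map(step => `- ${step}`)
    .join('\n')
  return `### ${scenario.cuke}\n\n${tags}${scenario.description}\n\n${steps}\n`
}

// Hand-built example input:
const md = scenarioToMarkdown({
  cuke: 'Scenario: Blank password',
  tags: ['@login'],
  description: 'Logging in with an empty password shows an error.',
  steps: [
    'Given Valerie is on the login page',
    'When she submits an empty password',
    'Then she should see a validation error'
  ]
})
console.log(md)
```

From there it's just a matter of concatenating the sections per feature and writing them out with fs.writeFileSync.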
Flowcharts
Flow charts make it easier to visualise an issue, and there are a few tools that use markdown to help generate them. Behind the rendered page, a chart ends up looking like this:
### Login
Customers should be able to log into their account, as long as they have registered.
...
```mermaid
graph TD
navigateToLogin["Navigate to Login"] -->logIn{"Login"}
logIn -->validCredentials["Valid<br>Credentials"]
logIn -->invalidCredentials{"Invalid<br>Credentials"}
invalidCredentials -->blankPass["Blank Password"]
invalidCredentials -->wrongPass["Wrong Password"]
invalidCredentials -->blankEmail["Blank Email"]
invalidCredentials -->wrongEmail["Wrong Email"]
...
click blankPass "/#/areas/login/scenario-blank-password" "View Scenario"
...
```
It's essentially just a really quick way to visualise issues, and it links us to the correct places in the documentation to find an answer. The tool draws out the flowchart; you just have to make the connections between key concepts or ideas on the page (e.g. a new customer gets a different start screen).
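Generating those diagram lines can itself be automated. The helper below is a hypothetical sketch (not part of cucumber or mermaid) that builds the edge and click lines from a parent node id and a list of scenario names, following the conventions in the chart above:

```javascript
// Hypothetical sketch: emit mermaid edge + click lines for a set of
// scenarios hanging off one parent node, matching the chart above.
function toMermaid (parent, scenarios) {
  const lines = ['graph TD']
  for (const name of scenarios) {
    // Build a camelCase node id like "blankPassword" from "Blank Password"
    const id = name
      .split(/\s+/)
      .map((w, i) => i === 0 ? w.toLowerCase() : w[0].toUpperCase() + w.slice(1).toLowerCase())
      .join('')
    // Edge from the parent, with <br> between words in the label
    lines.push(`${parent} -->${id}["${name.replace(/\s+/g, '<br>')}"]`)
    // Link the node to the scenario's page in the living documentation
    lines.push(`click ${id} "/#/scenario-${name.toLowerCase().replace(/\s+/g, '-')}" "View Scenario"`)
  }
  return lines.join('\n')
}

console.log(toMermaid('invalidCredentials', ['Blank Password', 'Wrong Email']))
```

The click-target URLs here are made up; in practice they would point at wherever your converted markdown lands in docsify.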
Screenplay Pattern, Serenity and Debugging
I think all that really needs to be said here is that when you run a test, this is your output:
✓ Correct information on the billing page
✓ Given Valerie has logged into her account
✓ Valerie attempts to log in
✓ Valerie visits the login page
✓ Valerie navigates to '/login'
✓ Valerie waits up to 5s until the email field does become visible
✓ Valerie enters 'thisisclearlyafakeemail@somemailprovider.com' into the email field
✓ Valerie enters 'P#ssword!231' into the password field
✓ Valerie clicks on the login button
✓ Valerie waits for 1s
It will break down any part of the test into descriptions, which means if the CSS changes, we won't be searching for something that no longer exists, and even someone new to debugging that area of the site will be able to pick up from a test failure.
Communication
I think all of that should show how communication can be improved in a more general sense. It's all about making sure that projects are accessible to as many people as possible who could input something valuable (which should be everyone in your business).

Bitdefender detects my console application as Gen:Variant.Ursu.56053 [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I've developed a console application that runs a lot of routines, but the antivirus detects it as malware of type Gen:Variant.Ursu.56053.
How can I fix this without touching the antivirus policy? We are not allowed to create exceptions for any found threat.
I'd also like to mention that if I change the assembly name, the antivirus no longer considers the new file a virus. It seems to flag it because I invoke it many times with different parameters.
Any suggestions? I'm really suffering from this.
I know this thread is very old, but for people who end up here: to fix this issue, simply add an icon to the program. I'm not even joking; it works.
False positive alert! Many antivirus engines use name pattern matching as their Swiss-army knife for detecting malicious files. If your assembly's name matches a pattern in their database, there isn't much you can do about it; it has simply become a false positive. Your assembly name should consist of the technology area and component description, or company name and technology area (depending on your preference), so try changing it to something more specific.
Assuming you are talking about .NET (with Visual Studio), for example:
Project: Biometric Device Access
Assembly: BiometricFramework.DeviceAccess.dll
Namespace: ACME.BiometricFramework.DeviceAccess
I had the same problem with Bitdefender, but mine was a Gen:Variant.Ursu.787553, triggered when I created an .exe file from my C program.
I simply moved it out of quarantine manually, and it worked well. You might have to do that every time you build a new program. Hope this helps!

Grade E on Make fewer HTTP requests in YSlow for my magento website

Grade E on Make fewer HTTP requests
This page has 3 external stylesheets. Try combining them into one.
This page has 19 external background images. Try combining them with CSS sprites.
What should I do to improve it to Grade A?
Is there anything I should change in the .htaccess file, or anywhere else, to improve this?
I have already done a lot of things and reached a score of 89, but I want to improve it to Grade A. I am using an Apache server.
How would I do this? Please can someone suggest something.
Thanks
Here's a really simple way to fix your external stylesheet problem: open up your Magento admin, go to System>Configuration>Advanced>Developer and under CSS Settings set "Merge CSS Files" to Yes.
To resolve the second issue, creating a CSS sprite sheet is a good idea (though it can be a bit of a time sink unless you did it from the start). Loading your theme graphics independently causes your response time to take a pretty big hit, so the general idea is to have your site icons and background images load in a single file, then use some CSS trickery to only show them as you need them. This article on Smashing Magazine should get you started with CSS sprites: http://coding.smashingmagazine.com/2012/04/11/css-sprites-revisited/
As for overall speed optimisation, there are numerous (extensive) blog posts on this subject, just have a search for them. Make sure you know what you're doing before making changes to your server configuration, or else a slow response will be the least of your troubles!

why is LaTeX / pdflatex compiler so 'funky' with multiple compiles necessary and bogus error messages, etc, compared to c++? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Is there a simple explanation for why the latex / pdflatex compiler is funky in the following two ways:
1) Multiple compiles (N of them) are necessary until you reach a "steady state" version; N seems to grow to around 5 or 6 if I use many packages and references.
2) Error messages are almost always worthless; the actual error is not flagged. Example:
\begin{itemize} % Line 499
\begin{enumerate}
% Comment: error: forgot to close the enumerate block
\item This is a bullet point.
\end{itemize} % Line 503
result: "Error on line 1 while scanning \begin{document}", not very useful.
I realize there is a separate "tex exchange" but I'm wondering if someone knowledgeable about c++, java, or other compilers can provide some insight on how those seem to support single-compile and proper error localization.
Edit: this document seems like a rant justifying the hacks in latex's implementation, but what about latex's syntax/language properties make the weird implementation necessary? http://tug.org/texlive/Contents/live/texmf-dist/doc/generic/knuth/tex/tex.pdf
From a LaTeX point of view:
You should require at most 3 (maybe 4) compiles to reach a steady state. This depends not on the number of packages but on possible layout changes within your document. Layout changes cause references to move, and those references need to be correct (hence recompiling until they stop moving).
Nesting of environments is allowed (although this does not address your problem directly). Also, macro definitions act as replacement text for your input. So, even though you write \end{itemize}, it is actually transformed into a bunch of other/different (primitive) macros, removing the obvious-to-humans structure and consequently also the bizarre error message. That's why some of the error messages are difficult to interpret.
wrt. point (2):
Considering that most of the errors are picked up while parsing macro definitions that get expanded, my guess is that errors wouldn't be useful to the user even if they contained the location and specific cause, because they don't translate well into what you see when you view the code.
Still, it would be useful if they were just a little bit more explicit. :/

Keyword Spotting in Speech [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. We don't allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 6 years ago.
Is anyone aware of a keyword spotting system that is freely available, and possibly provides APIs?
CMU Sphinx 4 and MS Speech API are speech recognition engines, and cannot be used for KWS.
SRI has a keyword spotting system, but no download links, not even for evaluation. (I couldn't even find a link to contact them about their software.)
I found one here but it's a demo and limited.
CMUSphinx implements keyword spotting in pocketsphinx engine, see for details the FAQ entry.
To recognize a single keyphrase you can run the decoder in "keyphrase search" mode.
From the command line, try:
pocketsphinx_continuous -infile file.wav -keyphrase "oh mighty computer" -kws_threshold 1e-20
From the code:
ps_set_keyphrase(ps, "keyphrase_search", "oh mighty computer");
ps_set_search(ps, "keyphrase_search");
ps_start_utt();
/* process data */
You can also find examples for Python and Android/Java in our sources. Python code looks like this, full example here:
# Process audio chunk by chunk. On keyphrase detected, perform an action and restart the search
decoder = Decoder(config)
decoder.start_utt()
while True:
    buf = stream.read(1024)
    if buf:
        decoder.process_raw(buf, False, False)
    else:
        break
    if decoder.hyp() is not None:
        print([(seg.word, seg.prob, seg.start_frame, seg.end_frame) for seg in decoder.seg()])
        print("Detected keyphrase, restarting search")
        decoder.end_utt()
        decoder.start_utt()
The threshold must be tuned for every keyphrase on test data to get the right balance between missed detections and false alarms. You can try values like 1e-5 to 1e-50.
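The tuning loop itself is simple to sketch. Below is a toy illustration with made-up scores (real scores come from the decoder); it just shows how permissive thresholds trade false alarms for fewer misses:

```javascript
// Toy sketch: tune the detection threshold on labelled test utterances.
// Each item carries an illustrative keyphrase score (NOT real pocketsphinx
// output) and whether the keyphrase was actually spoken.
const testSet = [
  { score: 1e-10, spoken: true },
  { score: 1e-25, spoken: true },
  { score: 1e-35, spoken: false },
  { score: 1e-15, spoken: false }
]

// A detection fires when the score is at or above the threshold,
// so lower thresholds mean more false alarms and fewer misses.
function evaluate (threshold, data) {
  let misses = 0
  let falseAlarms = 0
  for (const { score, spoken } of data) {
    const detected = score >= threshold
    if (spoken && !detected) misses++
    if (!spoken && detected) falseAlarms++
  }
  return { misses, falseAlarms }
}

// Sweep from permissive to strict and inspect the trade-off.
for (const t of [1e-40, 1e-30, 1e-20, 1e-5]) {
  const { misses, falseAlarms } = evaluate(t, testSet)
  console.log(`threshold=${t}: misses=${misses}, falseAlarms=${falseAlarms}`)
}
```

You would pick the threshold whose miss / false-alarm balance suits your application, then pass it via -kws_threshold.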
For the best accuracy it is better to have keyphrase with 3-4 syllables. Too short phrases are easily confused.
You can also search for multiple keyphrases; create a file keyphrase.list like this:
oh mighty computer /1e-40/
hello world /1e-30/
other_phrase /other_phrase_threshold/
And use it in decoder with -kws configuration option.
pocketsphinx_continuous -inmic yes -kws keyphrase.list
This feature is not yet implemented in sphinx4 decoder.