Styling for responsive HTML email - attribute="x" vs. style="attribute:x;"

When styling tables for emails (or for any project), what is the significance of:
<table cellpadding="10px">
over:
<table style="padding:10px;">

In email you would use <table cellpadding="10">.
In the case of what you posted, some email clients will ignore attribute="10px" completely, because the HTML attribute expects a unitless number; that is why you would use attribute="10" instead.
The problem with inline styles is that Outlook tends to ignore them, and it is selective about what it ignores.
The cellpadding attribute is obsolete in HTML5, but email clients like Outlook 2007-2019 do not follow best practices.
Email development is not web development. It's a different style and set of rules you need to follow for optimal results.
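To tie the two approaches together, here is a minimal sketch of the belt-and-braces pattern commonly used in email markup (the width, font, and content are placeholders, not part of the question): set the unitless legacy attributes on the table and repeat the spacing as an inline style on the cell, so whichever mechanism a given client honours, the rendering stays the same.
<!-- Hypothetical email-safe table: unitless legacy attributes for older clients
     such as Outlook, plus the same padding repeated as an inline style. -->
<table width="600" cellpadding="10" cellspacing="0" border="0" role="presentation">
  <tr>
    <td style="padding:10px; font-family:Arial, sans-serif;">
      Hello from an email-safe table cell.
    </td>
  </tr>
</table>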
Good luck.

Related

IDs required for UI automation in Selenium in all HTML pages

There is a project where we are going to automate the UI, but the automation team is suggesting that we use IDs all over the page so that it will be easy for them to automate their scripts.
My question is: why should we use IDs everywhere, hampering the HTML and CSS structure?
Can the web page be automated without IDs in the HTML, yes or no?
Yes, a web page can be automated without IDs. For example, you can play with CSS selectors here: https://www.w3schools.com/cssref/trysel.asp (note that the example page has elements with and without ids).
Using ids for element lookup in automation is generally considered a best practice. If you use ids, your automation tests become independent of the HTML structure, which makes them more stable.
For example, in the first version of your app you may have some text implemented as
<p id="someTextId" class="someClass">Hello world</p>
but at some point you may decide to rewrite it as (changing the tag and even applying a different class name)
<div id="someTextId" class="anotherClass">Hello world</div>
If you rely on the id #someTextId to locate the element, your test will still be able to access it and interact with it properly. If you use p or .someClass, your automation test will fail to find the element even though, from the UI perspective, the same text is displayed in the browser.
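To make this concrete, here is a minimal sketch using the JavaScript selenium-webdriver bindings (the browser choice and page URL are assumptions, and the page is presumed to contain the element above); any other language binding works the same way.
// Minimal sketch with the selenium-webdriver npm package.
const { Builder, By } = require('selenium-webdriver');

(async function example() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/some-page');        // hypothetical page
    // Survives the <p> -> <div> rewrite because it depends only on the id.
    const text = await driver.findElement(By.id('someTextId')).getText();
    console.log(text);                                         // "Hello world"
  } finally {
    await driver.quit();
  }
})();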
That said, we faced several downsides of using ids:
Some frameworks do not recommend using them or generate them automatically. (Potential issues with ids described here https://www.javascriptstuff.com/use-refs-not-ids/, https://www.quora.com/Are-IDs-a-big-no-no-in-the-CSS-world, https://dev.to/claireparker/reasons-not-to-use-ids-in-css-4ni4, https://www.creativebloq.com/css3/avoid-css-mistakes-10135080, https://www.reddit.com/r/webdev/comments/3ge2ma/why_some_people_dont_use_ids_at_all/)
Some other logic may rely on them, so changing or adding them for the needs of automation may unexpectedly affect other app logic.
What you can use instead of an id is some other attribute. For example, in our projects we switched from id to a specific attribute named dataSeleniumId. It clearly shows that the attribute exists for Selenium tests only. As a next step, you can add a team rule that whoever changes or removes a dataSeleniumId attribute must inform the automation testing team, because changing or removing it will lead to test failures; to avoid false failures it is better to fix it in advance.
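Locating by such an attribute stays just as simple; continuing the driver sketch above (the value "submit-button" is made up):
// A CSS attribute selector is independent of tag names, classes, and ids.
const submit = await driver.findElement(By.css('[dataSeleniumId="submit-button"]'));
await submit.click();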
For an automation developer it's much easier to browse through the HTML code and see the id of a specific button/text field/etc. in order to implement the relevant locator inside the automated test.
In most cases, the project starts to accumulate duplicated classes or complicated nested elements. This makes the automation developer's life harder: they have to write XPath or CSS selectors, verify that they work, and confirm that each locator finds only one unique element.
It's up to the team and the code style suggested by the team lead.
Back to the question: yes, the website can be written without ids, but if the goal is to automate a large part of the website, ids are a great help to the automation team.

Security - Protecting an insert statement from malicious code

I'm building a commenting system. The comment is sent to a stored procedure in SQL.
What is the best way to prevent html, script, or SQL queries to be injected into the table? I want to do this server-side.
For example:
INSERT INTO MyTable (UserID, Comment) VALUES (#UserID, #Comment)
What would be the best way to deal with the comment field and remove any potential HTML, scripts, or queries to prevent attacks? Or to drop the insert if it contains certain characters? Eventually I want the user to be able to insert a link, though, which would render on the site as a clickable link...
I'm just new to this security stuff, and obviously it's important.
Thank you so much.
Use parameterised statements (as you appear to be doing) with parameters for all variables and you have nothing to worry about from SQL injection.
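For illustration only, here is a minimal sketch of that insert as a parameterized query from Node.js with the mssql package (the connection settings and function name are assumptions; ADO.NET's SqlParameter, or passing the values as parameters of your stored procedure, follows the same principle).
// Minimal sketch with the node-mssql package; config values are placeholders.
const sql = require('mssql');

async function addComment(userId, comment) {
  const pool = await sql.connect({
    server: 'localhost', database: 'MyDb',
    user: 'app', password: 'secret',
    options: { trustServerCertificate: true }
  });
  // The driver sends UserID and Comment as typed parameters instead of splicing
  // them into the SQL text, so the comment can never change the statement itself.
  await pool.request()
    .input('UserID', sql.Int, userId)
    .input('Comment', sql.NVarChar, comment)
    .query('INSERT INTO MyTable (UserID, Comment) VALUES (@UserID, @Comment)');
}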
HTML and JS injections are a concern to do with the page output phase, not database storage. Trying to do HTML escaping or validation in the database layer will be frustrating and fruitless: it's not the right place to be dealing with those concerns, you'll miss or mis-handle data, and the tools for string manipulation in SQL are weak.
Don't think in terms of detecting “attacks”, because blacklists will always fail. Instead aim to handle all text correctly, and then you'll be secure as a side effect of being accurate. Variable text that you drop into an HTML file needs to be HTML-escaped; variable text that you drop into a JavaScript string literal needs to be JS-escaped.
If you're using standard .NET templates, use the <%: syntax to HTML-escape text. Use that as your output tag instead of <%= and you'll be fine. Similarly, if you're using WebForms, use the controls whose Text property is automatically HTML-escaped. (Unfortunately this is inconsistent.) Where you have to generate markup directly, use HttpUtility.HtmlEncode explicitly.
Encoding for JavaScript string literals is a little trickier. There is HttpUtility.JavaScriptStringEncode, but JS strings commonly live inside HTML <script> blocks (making the </ sequence dangerous where it isn't in native JS), or in HTML inline event handlers (where you would need to JS-encode and then HTML-encode as well). It tends to be a better strategy to encode the data you want to send to JS in the DOM using regular HTML-escaping, for example in a data- attribute or an <input type="hidden">, and have the JS grab the value from the DOM.
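A small sketch of that pattern, with made-up element and attribute names: the server HTML-escapes the value into a data- attribute once, and the script reads it back from the DOM instead of having the value spliced into the script text.
<!-- The server-side template HTML-escapes the value once (e.g. with <%: or HtmlEncode). -->
<div id="comment-config" data-author-name="&lt;b&gt;Bob&lt;/b&gt;"></div>
<script>
  // dataset returns the decoded text; no JavaScript-string escaping is needed.
  var authorName = document.getElementById('comment-config').dataset.authorName;
  console.log(authorName); // the string "<b>Bob</b>", treated as text, not markup
</script>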
If you really have to allow the user to input custom markup, then you'll need to filter it at input time to a small whitelist of approved elements and attributes. Use an existing HTML purifier library.

Is there any SEO risk of using Javascript instead of real links?

Essentially I have a client who wants to change some links from something like:
<a href="http://www.google.com/" rel="nofollow">Click me</a>
to something like:
<span style="color:blue;cursor:pointer;" id="faux-link">Click me</span>
<script type="text/javascript">
$("#faux-link").click(function() {
    document.location = "http://www.google.com/";
});
</script>
Essentially this would make the "Click me" text behave in the same way, minus a few advanced link features (middle-click to open the link in a new tab, right-clicking to see "Open in New Window" and other options, etc.). It would also obviously not work for anyone with JavaScript disabled (or if JavaScript on the page had any fatal errors).
Are there any SEO downsides to this that anyone has experienced or any kind of comments from Google or others on this type of behavior?
In the first example (Click me) you use a standard <a> tag. But even though it uses the rel="nofollow" attribute, some web spiders may still follow the link. A bit more on that in the Nofollow in Google, Yahoo and MSN article.
In the second example you use a different way of building a link (using JavaScript and an HTML tag other than <a>, such as <span>). Googlebot can execute some JavaScript, but I do not believe it would execute large libraries like jQuery.
Please check the interview with Matt Cutts for more details. A quotation from that interview:
Matt Cutts: For a while, we were scanning within JavaScript, and we were looking for links. Google has gotten smarter about JavaScript and can execute some JavaScript. I wouldn't say that we execute all JavaScript, so there are some conditions in which we don't execute JavaScript. Certainly there are some common, well-known JavaScript things like Google Analytics, which you wouldn't even want to execute because you wouldn't want to try to generate phantom visits from Googlebot into your Google Analytics.
As I understand it, in both examples the intent was to stop web spiders from crawling or indexing those links. I think (with no evidence or article supporting this) that using the latter approach will not affect SEO significantly.
Use normal links (otherwise the pages will not be indexed by Google!) and use preventDefault() in JavaScript for the links you want to intercept; read up on preventDefault().
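As a sketch of that suggestion (the class name and URL are placeholders): keep a real <a> element so crawlers, middle-click, and right-click keep working, and intercept the click only where you need custom behaviour.
<a href="http://www.google.com/" class="tracked-link">Click me</a>
<script type="text/javascript">
$("a.tracked-link").on("click", function (e) {
    e.preventDefault();               // stop the normal navigation
    // ...do whatever custom work is needed here...
    document.location = this.href;    // then navigate using the real href
});
</script>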
The links that you want to provide in the click event are visible in the source of the page, so anybody who wants to view them can do so very easily. And, as you said, some features of a regular link will be disabled, so you may as well use normal links instead of jQuery.

SharePoint 2010: what's the recommended way to store news?

To store news for a news site, what's a good recommendation?
So far, I'm opting for creating a News Site, mainly because I get some web parts for free (RSS, "week in pictures"), workflows are in place, and the authoring experience in SharePoint seems reasonable.
On the other hand, I see, for example, that by just creating a Document Library I can store Word documents based on the "Newsletter" template and saved as web pages, and they look great; the authoring experience in Word is also better than that in SharePoint.
And what about just creating a blog site!
Anyway, what would people do? Am I missing a crucial factor here for one or the other? What's a good trade-off here?
Thanks!
From my experience, the best option would be to
Create a new News Site
Create a custom content type having properties like Region (Choice), Category (Choice), Show on homepage (Boolean), Summary (Note), etc.
Create a custom page layout attached to the above content type. Give it the look and feel you want your news articles to have.
Attach the page layout as the default content type for the Pages library of the News site.
The advantage of this approach is that you can use a CQWP web part on the home page to show the latest 5 articles. You can also show a one-liner or a picture if you make it a property of the custom content type.
By storing news in a Word document, you are not really using SharePoint as a publishing environment but only as a repository. The choice is yours.
D. All of the above
SharePoint gives you a lot of options because there is no one-size-fits-all solution. The flexibility is not meant to overwhelm you with choices, but rather to allow you to focus on your process, either how it exists now or how you want it to be, and then select the option that best fits that process.
My company's intranet is a team site, and news is placed into an Announcements list. We do not need anything flashy; the plain text just needs to be communicated to the employees. On the other hand, our public internet site is a publishing site, which gives our news pages a more finished touch in terms of styling and images. It also allows us to take advantage of scheduling, content roll-up, and friendly URLs, along with the security of locking down the view forms. Authoring and publishing such a page is more involved than the Announcements list, but each option perfectly fits what we want to accomplish in each environment.
Without knowing more about your needs or process, based only on your highlighting Word as the preferred authoring tool, I would recommend a Blog. It is not as fully featured as a publishing site, but there is some overlap. And posts can be authored in Word.
In the end, if you can list what you want to accomplish, how you want to accomplish it, and pick the closest option (News Site, Team Site, Publishing Site, Blog, Wiki, etc), then you will have made the correct choice.
I tend to use news publishing sites, for the reasons you mentioned and for the page editing features.
It also allows you to set scheduled go-live and un-publish dates which is kind of critical for news items.

Stop spam without captcha

I want to stop spammers from using my site. But I find CAPTCHA very annoying. I am not just talking about the "type the text" type, but anything that requires the user to waste his time to prove himself human.
What can I do here?
Requiring Javascript to post data blocks a fair amount of spam bots while not interfering with most users.
You can also use a nifty trick:
<input type="text" id="not_human" name="name" />
<input type="text" name="actual_name" />
<style>
#not_human { display: none }
</style>
Most bots will populate the first field, so you can block them.
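The server-side half of that trick might look like this minimal Express sketch (the route is made up; the field names match the form above).
// Reject any submission where the hidden honeypot field has been filled in.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/comments', (req, res) => {
  if (req.body.name) {                          // the hidden input uses name="name"
    return res.status(400).send('Rejected.');   // a bot filled the honeypot
  }
  // ...store req.body.actual_name and the rest of the submission...
  res.send('Thanks!');
});

app.listen(3000);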
I combine a few methods that seem quite successful so far:
Provide an input field with the name email and hide it with CSS display: none. When the form is submitted, check whether this field is empty. Bots tend to fill it with a bogus email address.
Provide another hidden input field which contains the time the page was loaded. Check whether the time between loading and submitting the page is larger than the minimum time it takes to fill in the form; I use between 5 and 10 seconds (a sketch of this check follows below).
Then check whether the number of GET parameters is what you would expect. If your form's action is POST and the underlying URL of your submission page is index.php?p=guestbook&sub=submit, then you expect 2 GET parameters. Bots try to add GET parameters, so this check would fail.
And finally, check that HTTP_USER_AGENT is set, which bots sometimes don't set, and that HTTP_REFERER is the URL of the page containing your form. Bots sometimes just POST to the submission page, causing HTTP_REFERER to be something else.
I got most of my information from http://www.braemoor.co.uk/software/antispam.shtml and http://www.nogbspam.com/.
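Here is the sketch of the timing check mentioned in the list above, using Express and express-session (route names and the 5-second threshold are assumptions); the load time is kept in the session rather than in a hidden field so it cannot be forged.
// Record when the form page was rendered, then require a minimum fill-in time.
const express = require('express');
const session = require('express-session');
const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));
app.use(express.urlencoded({ extended: false }));

app.get('/guestbook', (req, res) => {
  req.session.formLoadedAt = Date.now();
  res.send('<form method="post" action="/guestbook">...</form>');
});

app.post('/guestbook', (req, res) => {
  const elapsed = Date.now() - (req.session.formLoadedAt || 0);
  if (elapsed < 5000) {                         // faster than a human could type
    return res.status(400).send('Rejected.');
  }
  res.send('Thanks!');
});

app.listen(3000);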
Integrate the Akismet API to automatically filter your users' posts.
If you're looking for a .NET solution, the Ajax Control Toolkit has a control named NoBot.
NoBot is a control that attempts to provide CAPTCHA-like bot/spam prevention without requiring any user interaction. NoBot has the benefit of being completely invisible. NoBot is probably most relevant for low-traffic sites where blog/comment spam is a problem and 100% effectiveness is not required.
NoBot employs a few different anti-bot techniques:
Forcing the client's browser to perform a configurable JavaScript calculation and verifying the result as part of the postback. (Ex: the calculation may be a simple numeric one, or may also involve the DOM for added assurance that a browser is involved)
Enforcing a configurable delay between when a form is requested and when it can be posted back. (Ex: a human is unlikely to complete a form in less than two seconds)
Enforcing a configurable limit to the number of acceptable requests per IP address per unit of time. (Ex: a human is unlikely to submit the same form more than five times in one minute)
More discussion and a demonstration can be found in this blog post by Jacques-Louis Chereau on NoBot.
<ajaxToolkit:NoBot
    ID="NoBot2"
    runat="server"
    OnGenerateChallengeAndResponse="CustomChallengeResponse"
    ResponseMinimumDelaySeconds="2"
    CutoffWindowSeconds="60"
    CutoffMaximumInstances="5" />
I would be careful using CSS or Javascript tricks to ensure a user is a genuine real life human, as you could be introducing accessibility issues, cross browser issues, etc. Not to mention spam bots can be fairly sophisticated, so employing cute little CSS display tricks may not even work anyway.
I would look into Akismet.
Also, you can be creative in the way you validate user data. For example, let's say you have a registration form that requires a user email and address. You can be fairly hardcore in how you validate the email address, even going so far as to ensure the domain is actually set up to receive mail, and that there is a mailbox on that domain that matches what was provided. You could also use Google Maps API to try and geolocate an address and ensure it's valid.
To take this even further, you could implement "hard" and "soft" validation errors. If the mail address doesn't match a regex validation string, then that's a hard fail. Not being able to check the DNS records of the domain to ensure it accepts mail, or that the mailbox exists, is a "soft" fail. When you encounter a soft fail, you could then ask for CAPTCHA validation. This would hopefully reduce the amount of times you'd have to push for CAPTCHA verification, because if you're getting enough activity on the site, valid people should be entering valid data at least some of the time!
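A sketch of that hard/soft split in Node (the regex is deliberately minimal and the policy choices are assumptions): a syntax failure is a hard fail, while a failed MX lookup is only a soft fail that triggers a CAPTCHA.
const dns = require('dns').promises;

// Returns 'ok', 'soft-fail' (ask for a CAPTCHA), or 'hard-fail' (reject outright).
async function validateEmail(address) {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(address)) {
    return 'hard-fail';                          // not even shaped like an email
  }
  const domain = address.split('@')[1];
  try {
    const mx = await dns.resolveMx(domain);
    return mx.length > 0 ? 'ok' : 'soft-fail';   // domain resolves but has no MX
  } catch (err) {
    return 'soft-fail';                          // lookup failed: ask for a CAPTCHA
  }
}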
I realize this is a rather old post, however, I came across an interesting solution called the "honey-pot captcha" that is easy to implement and doesn't require javascript:
Provide a hidden text box!
Most spambots will gladly complete the hidden text box allowing you to politely ignore them.
Most of your users will never even know the difference.
To prevent a user with a screen reader from falling into your trap, simply label the text box "If you are human, leave blank" or something to that effect (see the sketch below).
Tada! Non-intrusive spam-blocking! Here is the article:
http://www.campaignmonitor.com/blog/post/3817/stopping-spambots-with-two-simple-captcha-alternatives
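A minimal sketch of such a honeypot field (names and styling are arbitrary); the field is moved off-screen instead of using display:none so the label suggested above still reaches screen-reader users.
<form method="post" action="/comments">
  <!-- Off-screen honeypot: screen readers still announce the label, so genuine
       users know to leave it empty; most bots fill it in anyway. -->
  <p style="position:absolute; left:-9999px;">
    <label for="website">If you are human, leave this field blank</label>
    <input type="text" id="website" name="website" tabindex="-1" autocomplete="off" />
  </p>
  <p>
    <label for="comment">Comment</label>
    <textarea id="comment" name="comment"></textarea>
  </p>
  <button type="submit">Post comment</button>
</form>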
Since it is extremely hard to avoid spam 100%, I recommend reading the IBM article 'Real Web 2.0: Battling Web spam', posted two years ago, where visitor behavior and control workflow are analyzed well and concisely.
Web spam comes in many forms, including:
Spam articles and vandalized articles on wikis
Comment spam on Weblogs
Spam postings on forums, issue trackers, and other discussion sites
Referrer spam (when spam sites pretend to refer users to a target site that lists referrers)
False user entries on social networks
Dealing with Web spam is very difficult, but a Web developer neglects spam prevention at his or her peril. In this article, and in a second part to come later, I present techniques, technologies, and services to combat the many sorts of Web spam.
The article also links to a very interesting "...hashcash technique for minimizing spam on Wikis and such, in addition to e-mail."
How about a human-readable question that tells the user to enter, for example, the first letter of the value they put in the first-name field and the last letter of the last-name field, or something like that?
Or render some hidden fields that are filled in by JavaScript with values such as the referrer, and check those fields for equality with the values you stored in the session beforehand.
If the values are empty, the user simply has no JavaScript, which by itself does not mean spam; but a bot will at least fill in some of them (see the sketch below).
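One variant of that idea as an Express/express-session sketch (all names are made up): the server stores a token in the session and renders it into an inline script, the script copies it into a hidden field, and the POST handler compares the two. An empty field just means no JavaScript; a non-empty value that does not match means something submitted the form without actually rendering the page.
const express = require('express');
const session = require('express-session');
const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));
app.use(express.urlencoded({ extended: false }));

app.get('/form', (req, res) => {
  req.session.jsToken = Math.random().toString(36).slice(2);   // per-visit token
  res.send(`
    <form method="post" action="/form">
      <input type="hidden" name="js_token" id="js_token" value="" />
      <textarea name="comment"></textarea>
      <button type="submit">Send</button>
    </form>
    <script>
      // Filled in only when the browser actually runs JavaScript.
      document.getElementById('js_token').value = '${req.session.jsToken}';
    </script>`);
});

app.post('/form', (req, res) => {
  const posted = req.body.js_token;
  // Empty only means the visitor has no JavaScript; a wrong value is suspicious.
  if (posted && posted !== req.session.jsToken) {
    return res.status(400).send('Rejected.');
  }
  res.send('Thanks!');
});

app.listen(3000);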
Surely you should pick one approach: a honeypot or BOTCHA.