I have a very simple .emacs file in my home directory, and I'm trying to get Emacs to indent 3 spaces when I hit Tab. Instead I always get 2 spaces, which is the default behavior. It seems to completely ignore my .emacs file?
Here are the contents of the .emacs. If it's correct (it seems to be...), it must be getting ignored? It's damn short; there's not much to go wrong:
;; -*-Emacs-Lisp-*-
;; This file is designed to be re-evaled; use the variable first-time
;; to avoid any problems with this.
(setq c++-mode-hook
      (function (lambda ()
                  (setq indent-tabs-mode nil)
                  (setq c-indent-level 3))))
(custom-set-variables
'(tab-stop-list (quote (3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75))))
(setq indent-tabs-mode nil)
(setq tab-width 3)
Try setting the variable c-basic-offset:
(setq c-basic-offset 3)
See the "Getting Started" section of the CC Mode manual for more details. There are many, many ways to customize the indentation behavior.
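For example, a minimal sketch using a CC Mode hook (the hook body is illustrative, not the only way to do this):
;; Use 3-space indentation in all CC Mode languages (C, C++, ...).
(add-hook 'c-mode-common-hook
          (lambda ()
            (setq indent-tabs-mode nil)  ; indent with spaces, not tabs
            (setq c-basic-offset 3)))    ; basic indentation step of 3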
I'd like to reopen this old question. @user1021810 was able to fix their problem by adding the home directory to $PATH. I did that too, but my .emacs still seems to be ignored. I also tried
creating a file .emacs.el in my homedir
creating a file init.el in ~/.emacs.d dir
Nothing seems to work. Ideas sought.
Thanks.
Update: The contents of my .emacs file are valid, because its effects take place as soon as I load the file manually.
I am trying to make a lag machine schematic in 1.12.2, and instead of placing 10,000 armor stands by hand, I was wondering if there is any way to /fill them, or to use something in WorldEdit to do the same thing. I looked all over Google but could not find anything that worked. I found a video (https://www.youtube.com/watch?v=LySru5q3j1U), but it did not work for me, nor did it give any error messages. Any help is appreciated.
/fill 351 8 223 192 8 64 torch (regular command block)
/execute @e[type=item.item.torch] ~ ~ ~ summon ArmorStand (chain command block)
Try setting your chain command block to Always Active.
Chain command blocks don't run if they aren't active.
The following, when copied and pasted directly into R, works fine:
> character_test <- function() print("R同时也被称为GNU S是一个强烈的功能性语言和环境,探索统计数据集,使许多从自定义数据图形显示...")
> character_test()
[1] "R同时也被称为GNU S是一个强烈的功能性语言和环境,探索统计数据集,使许多从自定义数据图形显示..."
However, if I make a file called character_test.R containing the exact same code, save it in UTF-8 encoding (so as to retain the special Chinese characters), and then source() it in R, I get the following error:
> source(file="C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "UTF-8")
Error in source(file = "C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "utf-8") :
C:\Users\Tony\Desktop\character_test.R:3:0: unexpected end of input
1: character.test <- function() print("R
2:
^
In addition: Warning message:
In source(file = "C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "UTF-8") :
invalid input found on input connection 'C:\Users\Tony\Desktop\character_test.R'
Any help you can offer in solving and helping me to understand what is going on here would be much appreciated.
> sessionInfo() # Windows 7 Pro x64
R version 2.12.1 (2010-12-16)
Platform: x86_64-pc-mingw32/x64 (64-bit)
locale:
[1] LC_COLLATE=English_United Kingdom.1252
[2] LC_CTYPE=English_United Kingdom.1252
[3] LC_MONETARY=English_United Kingdom.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United Kingdom.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base
loaded via a namespace (and not attached):
[1] tools_2.12.1
and
> l10n_info()
$MBCS
[1] FALSE
$`UTF-8`
[1] FALSE
$`Latin-1`
[1] TRUE
$codepage
[1] 1252
On R for Windows, source() runs into problems with any UTF-8 characters that can't be represented in the current locale (or "ANSI code page" in Windows-speak). Unfortunately, Windows doesn't have UTF-8 available as an ANSI code page: Windows has a technical limitation that ANSI code pages can only be one- or two-byte-per-character encodings, not variable-byte encodings like UTF-8.
This doesn't seem to be a fundamental, unsolvable problem; there's just something wrong with the source function. You can get 90% of the way there by doing this instead:
eval(parse(filename, encoding="UTF-8"))
This'll work almost exactly like source() with default arguments, but won't let you use echo=TRUE, print.eval=TRUE, etc.
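For example, with the file from the question (a sketch; evaluation happens in the calling environment, as with source()):
# Parse the whole file as UTF-8, then evaluate the parsed expressions.
eval(parse("C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "UTF-8"))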
We talked about this a lot in the comments on my previous post, but I don't want it to get lost on page 3 of the comments: you have to set the locale. It works both with input from the R console (see the screenshot in the comments) and with input from a file, as this session shows:
The file "myfile.r" contains:
russian <- function() print ("Американские с...");
The console session:
> source("myfile.r", encoding="utf-8")
Error in source(".....
> Sys.setlocale("LC_CTYPE","ru")
[1] "Russian_Russia.1251"
> russian()
[1] "Американские с..."
Note that sourcing the file fails at first, and the error points to the same character as in the original poster's error (the one right after the opening "R). I cannot test this with Chinese, because I would have to install "Microsoft Pinyin IME 3.0", but the process is the same: you just replace the locale with "chinese" (the naming is a bit inconsistent; consult the documentation).
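Condensed into code, the approach looks like this (a sketch; the locale names are Windows-specific, and "ru" resolves to "Russian_Russia.1251" there):
# Switch the character-type locale first, then source the UTF-8 file.
Sys.setlocale("LC_CTYPE", "ru")
source("myfile.r", encoding = "UTF-8")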
I think the problem lies with R. I can happily source UTF-8 files, or UCS-2LE files, with many non-ASCII characters in them, but some characters cause it to fail. For example, the following
danish <- function() print("Skønt H. C. Andersens barndomsomgivelser var meget fattige, blev de i hans rige fantasi solbeskinnede.")
croatian <- function() print("Dodigović. Kako se Vi zovete?")
new_testament <- function() print("Ne provizu al vi trezorojn sur la tero, kie tineo kaj rusto konsumas, kaj kie ŝtelistoj trafosas kaj ŝtelas; sed provizu al vi trezoron en la ĉielo")
russian <- function() print ("Американские суда находятся в международных водах. Япония выразила серьезное беспокойство советскими действиями.")
is fine in both UTF-8 and UCS-2LE without the Russian line, but if that line is included then it fails. I'm pointing the finger at R. Your Chinese text also appears to be too hard for R on Windows.
Locale seems irrelevant here: it's just a file, and you tell R what encoding the file is in, so why should your locale matter?
For me (on Windows), the following works:
source.utf8 <- function(f) {
  # Read the file as UTF-8, then parse and evaluate the code
  # in the global environment, much as source() would.
  l <- readLines(f, encoding = "UTF-8")
  eval(parse(text = l), envir = .GlobalEnv)
}
It works fine.
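Usage is then simply (the path is illustrative):
source.utf8("C:\\Users\\Tony\\Desktop\\character_test.R")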
Building on crow's answer, this solution makes RStudio's Source button work.
When you hit that Source button, RStudio executes source('myfile.r', encoding = 'UTF-8'), so overriding source makes the errors disappear and runs the code as expected:
source <- function(f, encoding = 'UTF-8') {
  # Same trick as above: read the file as UTF-8 (by default),
  # then parse and evaluate it in the global environment.
  l <- readLines(f, encoding = encoding)
  eval(parse(text = l), envir = .GlobalEnv)
}
You can then add that script to an .Rprofile file, so it will execute on startup.
I encountered this problem when I tried to source a .R file containing some Chinese characters. In my case, I found that merely setting "LC_CTYPE" to "chinese" is not enough, but setting "LC_ALL" to "chinese" works well.
Note that getting the encoding right when you read or write a plain-text file with non-ASCII characters in RStudio (or R?) is not enough; the locale setting counts too.
P.S. The command is Sys.setlocale(category = "LC_CTYPE", locale = "chinese"); replace the locale value as appropriate.
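A minimal sketch of the LC_ALL variant described above (the locale name "chinese" is Windows-specific, and the file name is illustrative):
# Set every locale category at once, then source the UTF-8 file.
Sys.setlocale(category = "LC_ALL", locale = "chinese")
source("character_test.R", encoding = "UTF-8")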
On Windows, when you copy and paste a Unicode or UTF-8 encoded string into a text control that is set to single-byte input (ASCII... depending on locale), the unknown bytes are replaced by question marks. If I take the first 4 characters of your string and copy-paste them into e.g. Notepad and then save it, the file becomes, in hex:
52 3F 3F 3F 3F
What you have to do is find an editor which you can set to UTF-8 before copy-pasting the text into it; then the saved file (of your first 4 characters) becomes:
52 E5 90 8C E6 97 B6 E4 B9 9F E8 A2 AB
This will then be recognized as valid UTF-8 by [R].
I used "Notepad2" to try this, but I am sure there are many more editors that can do it.
How can I view executed SQL stored procedures from a memory crash dump of a C# .net application?
The information you want might not be available any more. The objects representing connections, queries and results may already have been garbage collected.
If you know the name of the class that is used for a query, you can use WinDbg, load the SOS extension, and use the !dumpheap -type <classname> command to find those objects. Then use !do <address> to display the details of such an object. This might reveal the statement behind it.
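For example, assuming the queries were issued through the .NET Framework SqlClient (the type name is an assumption; adjust it to your data-access library):
0:000> .loadby sos clr
0:000> !dumpheap -type System.Data.SqlClient.SqlCommand
0:000> !do <address from the dumpheap output>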
If you're looking for a specific query, download the NetExt extension. It comes with the !wfrom command, which allows you to select objects from the .NET heap based on criteria that you define.
This question has no trivial answer, and many experienced debuggers have had to face this issue one way or another. Getting the information for a single SQL command is tedious but straightforward; doing it for many commands at once is difficult. I was helping a colleague with a situation like this and put together this solution. It requires NetExt (download the zip file): https://github.com/rodneyviana/netext/tree/master/Binaries
0:067> !windex
Starting indexing at 14:18:54 PM
1000000 objects...
2000000 objects...
3000000 objects...
4000000 objects...
Indexing finished at 14:19:04 PM
399,314,598 Bytes in 4,049,972 Objects
Index took 00:00:10
0:067> .foreach ({$addr} {!wfrom -type *.sqlcommand -nospace -nofield where (_commandType == 4) select _parameters._items._items}) { r #$t1= {$addr} ;!wfrom -type *.sqlcommand -nofield -nospace where (_parameters._items._items==$dbgeval("#$t1")) "\n", _commandText," [",$addr(),"]\n========================"; !wfrom -nospace -nofield -array {$addr} _parameterName,"=",$pp(_value) }
dbo.proc_getSiteIdOfHostHeaderSite [00000001011FCD90]
========================
#RETURN_VALUE=0n0
#HostHeader=sp.contoso.com
#RequestGuid={00000000-0000-0000-0000-000000000000}
proc_GetTpWebMetaDataAndListMetaData [0000000101D98DA0]
========================
#RETURN_VALUE=0n0
#WebSiteId={2815591d-55c8-4caf-842c-101d8807cb2a}
#WebId=NULL
#Url=
#ListId=NULL
#ViewId=NULL
#RunUrlToWebUrl=1
#DGCacheVersion=0n2
#SystemId=69 3a 30 29 2e 77 7c 73 2d 31 2d 35 2d 32 31 2d 31 33 38 35 31 37 34 39 39 32 2d 39 37 39 39 35 i:0).w|s-1-5-21-1385174992-97995 (...more...)
#MetadataFlags=0n18
#ThresholdScopeCount=0n5000
#CurrentFolderUrl=NULL
#RequestGuid={a00a0b30-1fcc-44d6-8346-ec20f8c49304}
proc_EnumLists [000000010236EDF0]
========================
#RETURN_VALUE=0n0
#WebId={a0a099b2-6023-4877-a7c1-378ec68df759}
#Collation=Latin1_General_CI_AS
#BaseType=NULL
#BaseType2=NULL
#BaseType3=NULL
#BaseType4=NULL
#ServerTemplate=NULL
#FMobileDefaultViewUrl=NULL
#FRootFolder=NULL
#ListFlags=0n-1
#FAclInfo=0n1
#Scopes=NULL
#FRecycleBinInfo=NULL
#UserId=NULL
#FGP=NULL
#RequestGuid={a00a0b30-1fcc-44d6-8346-ec20f8c49304}
(...)
I'm having a problem with AUCTeX in XEmacs (I think). I get this error when I load a *.tex file in XEmacs:
(1) (custom/warning) custom: widget custom-variable, option LaTeX-section-hook has no associated group
Here's the part of my init.el that's relevant:
(setq auto-mode-alist (append '(("\\.tex$" . LaTeX-mode)) auto-mode-alist))
(require 'tex-site)
The error goes away if I comment out the second line above.
Here's my M-x TeX-submit-bug-report:
Emacs : XEmacs 21.5 (beta33) "horseradish" [Lucid] (i686-redhat-linux, Mule) of Tue Feb 5 2013 on buildvm-20.phx2.fedoraproject.org
Package: AUCTeX CVS-1.16 (-07--nil-nil)
current state:
==============
(setq
window-system 'x
LaTeX-version "2e"
TeX-style-path '("style/" "auto/"
"/usr/share/xemacs/xemacs-packages/etc/auctex/style/"
"/usr/share/xemacs/xemacs-packages/etc/auctex/auto/")
TeX-auto-save nil
TeX-parse-self nil
TeX-master t
)
I removed and reinstalled AUCTeX in XEmacs, and it now works. For the record, the AUCTeX that works with XEmacs 21.5 is AUCTeX 1.51.
I've been tasked with moving an old dynamic website from a Windows server to Linux. The site was initially written with no regard to character case. Some filenames were all upper-case, some lower-case, and some mixed. This was never a problem in Windows, of course, but now we're moving to a case-sensitive file system.
A quick find/rename command (thanks to another tutorial) got the filenames all to lowercase.
However, many of the URL references in the code still point to the mixed-case filenames, so I enabled mod_speling to overcome this issue. It seems to work OK for the most part, with the exception of one page: I have a file named haematobium.html, and every time a link points to .../haematobium.html, it gets rewritten as .../hæmatobium.html in the browser.
I don't know how this strange character made its way into the filename in the first place, but I've corrected the code in the HTML document to now link to haematobium.html, then renamed the haematobium.html file itself to match.
When requesting .../haematobium.html in Chrome, it "corrects" to .../hæmatobium.html in the address bar, and shows an error saying "The requested URL .../hæmatobium.html was not found on this server."
In IE9, I'm prompted for the login (this is a .htaccess-protected page); I enter it, and then it forwards the URL to .../h%C3%A6matobium.html, which again doesn't load.
In my frustration I even copied haematobium.html to both hæmatobium.html and hæmatobium.html, still, none of the three pages actually load.
So my question: I read somewhere that mod_speling tries to "learn" misspelled URLs. Does it actually rename files (is that where the odd character might have come from)? Does it keep a cache of what's been called for, and what it was forwarded to (a cache I could clear)?
PS. there are also many mixed-case references to MySQL database tables and fields, but that's a whole 'nother nightmare.
[Cannot comment yet, therefore answering.]
Your question doesn't make it entirely clear which of the two names (the two ASCII characters "ae", or the single Unicode ligature character "æ") for haematobium.html actually exists in your Apache server's file system.
Try the following in your shell:
$ echo -n h*matobium.html | hd
The output should be one of the following two alternatives. This is ASCII, with 61 and 65 for "a" and "e", respectively:
00000000 68 61 65 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |haematobium.html|
00000010
And this is Unicode, with c3 a6 for the single character æ:
00000000 68 c3 a6 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |h..matobium.html|
00000010
I would recommend using the ASCII version; it makes life considerably easier.
Now to your actual question: mod_speling neither "learns" nor renames files, nor caches its data. Any caching is done either by your browsers or by proxies between your browsers and the server.
It's actually best practice to test such cases with command-line tools like wget or curl, which should already be available, or be easily installable, on any Linux.
Use wget -S or curl -i to actually see the response headers sent by your web server.
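For example (example.com stands in for your server; the path is the one from your question):
$ wget -S --spider http://example.com/haematobium.html
$ curl -i http://example.com/haematobium.html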