Custom language server: how to get the client to send *all* files to the server, not just those opened/edited by the user? - vscode-extensions

I'm working on implementing a custom language server and a VSCode language extension. My starting point for the client side is lsp-sample. My server implementation is entirely from scratch, in a different language (not JS).
Currently, I've successfully set up textDocument/didOpen and textDocument/didChange messages to be sent by the client and received by the server. However, I'm having trouble figuring out how to synchronize all files in the VSCode workspace, not just those that the user has opened. I can't find where this is supported in the protocol. The only text document synchronization capabilities I see are for documents opening, closing, and edits. What about all the other documents in the workspace?
For example, in order to handle "goto definition" requests, the server needs to know about definitions in other files, perhaps those that have never been opened or edited by the user.
A hacky solution would be, on the server side, to parse the URI of the workspace and just go load a bunch of files manually. But this seems like something that the LSP should support; perhaps I'm just missing where it's documented. (Also, it feels like I would be violating the spirit of the LSP design to do some covert ops like this behind the scenes, without communicating with the client.)

Perhaps you're looking for the DidChangeWatchedFiles notification. From the specification:
The watched files notification is sent from the client to the server when the client detects changes to files and folders watched by the language client (note that although the name suggests that only file events are sent, it is about file system events, which include folders as well). It is recommended that servers register for these file system events using the registration mechanism. In former implementations, clients pushed file events without the server actively asking for it.
Basically, the client is responsible for watching the files you require and sends a notification to the server each time one of them changes. At that point the server can load them.
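To make that concrete, here is a sketch (in Python, though the server can be in any language) of the client/registerCapability request a server can send, after initialization, to ask the client to watch files on its behalf. The glob pattern "**/*.mylang" and the registration id are placeholders. Note that the resulting workspace/didChangeWatchedFiles notifications carry only URIs and change types, so the server still reads the file contents from disk itself; for the initial pass over files that have never been opened, servers typically walk the rootUri/workspaceFolders received in the initialize request.

import json

# Sketch: ask the client to watch files for us via dynamic registration.
# The registration id and the glob pattern are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "client/registerCapability",
    "params": {
        "registrations": [{
            "id": "workspace-file-watcher",
            "method": "workspace/didChangeWatchedFiles",
            "registerOptions": {
                # kind is a bitmask: 1 = Create, 2 = Change, 4 = Delete
                "watchers": [{"globPattern": "**/*.mylang", "kind": 7}]
            }
        }]
    }
}

# LSP messages are framed with a Content-Length header.
body = json.dumps(request).encode("utf-8")
message = b"Content-Length: %d\r\n\r\n" % len(body) + body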

Related

How to automate changing SteamVR settings with code

For a VR project I am working on, I need to be able to build an in-game interface that allows changing the role of Vive Trackers, ideally without restarting.
However, it seems you can only do this manually through the SteamVR settings window.
During my digging I learned that Steam uses WebHelpers (a custom browser) to render the settings and communicate with the software through plain (unsecured) HTTP requests and WebSockets.
I used Wireshark to capture the packets sent and managed to reproduce them using Python; however, there is a secret key (x-steamvr-secret) that changes at every Steam or computer startup. I didn't find any way to fetch that key, which makes sense, since it's secret. I understand that this prevents rogue programs from changing the user's config or triggering actions, but as admin of the computer I can't do it either.
import requests

headers = {"x-steamvr-secret": "13294285527328607850"}  # changes at every startup
r = requests.post(
    "http://127.0.0.1:27062/input/settrackerbinding.action",
    data=b'{"device_path":"/devices/htc/vive_trackerLHR-SERIAL_NUMBER",'
         b'"role":"TrackerRole_Camera",'
         b'"controller_role":"TrackedControllerRole_OptOut"}',
    headers=headers,
    timeout=5,
)
I also found a way to change the file where my specific settings are stored, either manually or through a web console that I can drive without the secret key; however, I need to restart SteamVR for those changes to apply, and that would mean restarting my VR program.
Does anyone have any idea how to automate these settings changes (with admin rights if needed), or even better, how to force SteamVR to reload its vrconfig file?
Thanks in advance and have a good day!

SSL Proxy / Decryption?

One of my clients just received the software he ordered from his chosen developers and asked me to look at it and prepare the hosting procedures.
It's a Java (jar) app, so far so good... but I saw something suspect: every 60 minutes or so the software connects to a remote host on port 443 using SSL, transfers ~3-10 MB of encrypted data (as a POST), then closes the connection. This is very strange. I tried to Wireshark it, but everything is encrypted and I have no clue what kind of data is being transferred; I only know the destination hostname. The data hosted within the app will be highly sensitive (insurance brokerage), and if my client decides to go with it, this is a serious issue for his business and also for his clients. I've asked the developer company about this, and they said that no one added anything like this, even though I provided them the proof (pcap).
I can block it in the firewall, but if they added something like this, there could be other hosts ready to receive the encrypted data.
The only way I can figure it out is to somehow decrypt the SSL traffic in order to read the raw data and give my client all the information he needs to talk with the developer company and sort it out. How can I do that? With some sort of SSL proxy or whatever... I tried to Google it but didn't find any relevant tutorials.
I have access to the physical machine running the Java application; I can see every single bit of the traffic, but... it's encrypted.
If I were in your place, instead of trying to decrypt the SSL connection I would have tried the following steps:
1) Since you know the host to which it is making a POST request, find out more about that service to learn what it does. Maybe try contacting that site, saying you need to consume their service and asking what you should send in the POST request ;)
2) The second way would be to decompile the jar file and find the line in the source code that makes the request; then you could go back to the developer and ask why it was written. To find the code making the call, you could block access to that host on your firewall. The code would fail, and most probably the exception would be logged in the app's log files. Find the stack trace and you will know the line of code that is making the request.
Hope this helps.
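If you do still want to see the plaintext, a TLS-intercepting proxy such as mitmproxy can work, but only under two assumptions: the app honors the standard JVM proxy settings, and it trusts mitmproxy's CA certificate (import it into the Java truststore with keytool; if the app pins its certificate or ships its own truststore, this approach fails). A minimal sketch of an addon that dumps the decrypted POST bodies (file names here are placeholders):

# log_posts.py -- minimal mitmproxy addon that dumps decrypted POST bodies.
# Run with: mitmdump -s log_posts.py  (listens on port 8080 by default),
# then start the app with the JVM pointed at the proxy, e.g.
#   java -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080 -jar app.jar
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    if flow.request.method == "POST":
        with open("captured_posts.bin", "ab") as f:
            f.write(flow.request.pretty_url.encode("utf-8") + b"\n")
            f.write(flow.request.content or b"")
            f.write(b"\n---\n")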

What was the evolution of the interaction paradigm between web server programs and content provider programs?

As I understand it, a web server is responsible for delivering content to the client. If it is static content, like pictures or static HTML documents, the web server just delivers it as a bitstream directly. If it is dynamic content generated while processing the client's request, the web server does not generate the content itself but calls some external program to generate it.
AFAIK, these dynamic content generation technologies include the following:
CGI
ISAPI
...
And from here, I noticed that:
...In IIS 7, modules replace ISAPI filters...
Are there any others? Could anyone help me complete the above list and elaborate on, or show some links about, their evolution? I think it would be very helpful for understanding applications such as IIS, Tomcat, and Apache.
I once wrote a small CGI program, and though it serves as a content generator, it is still nothing but a normal standalone program. I call it normal because the CGI program has a main() entry point. But with more recent technology like ASP.NET, I am not writing a complete program, only a class library. Why did such a radical change happen?
Many thanks.
Well, the biggest missing piece in your question is that you can have the web server generate the content dynamically as well. This is common with most platforms outside of PHP and Perl. You often set such a website behind Apache or nginx used as a proxy, but it doesn't "call an external program" in any reasonable sense; it forwards the HTTP request to the proxied server. This is mostly done so you can have multiple sites on the same server, and also so Apache/nginx can protect you against malformed requests.
But sure, we can, for the sake of the question, say that "proxying" is a way to call an external program. :-)
Another way to "call the external program" is Python's WSGI, where you do call into a permanently running server. So again you don't start an external program; it's more like calling a module in ASP (although it's a separate process, not a module, you don't start it with every request; you use an API).
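For illustration, a minimal WSGI "content generator" is just a callable that the permanently running server invokes once per request; there is no main() that the web server has to start:

# A minimal WSGI application: a callable invoked once per request by a
# long-running server process, not a program started for each request.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Serve it with the reference server from the standard library.
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8000, application).serve_forever()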
The change from calling external programs as in CGI to calling modules as in ASP.NET, processes with WSGI, or proxying to another web server happened because with CGI you have to start a new program for each request. The Perl/PHP interpreter needs to be loaded into memory, and all the modules it uses as well. This quickly becomes very heavy and process/memory intensive.
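Contrast that with a minimal CGI program, a complete executable that the web server forks and execs for every single request, interpreter startup included:

#!/usr/bin/env python3
# A minimal CGI program: a standalone executable. The web server starts
# a fresh process for every request, and the process exits afterwards.
import os

print("Content-Type: text/plain")
print()
print("Hello from CGI, path:", os.environ.get("PATH_INFO", "/"))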
Therefore, to be able to use bigger systems that are permanently running, other techniques have been developed. Most of them are platform/language dependent, and the only truly platform-independent one is to build a complete web server and then put Apache/nginx in front as a proxy (in which case Apache/nginx strictly speaking isn't necessary any more).
I hope this cleared things up a bit.
FastCGI and WSGI are two more interfaces content generators can use to talk to a web server -- the reason more recent interfaces aren't complete programs is that forking and executing things that expect to be executables is costly.
OTOH, writing your little generator in such a way that it doesn't leak anything between invocations is harder than having the liberty to just exit at the end (and rely on environment variables and command line arguments like a normal executable).
This is all for performance reasons, but the price is more complicated content generators and process management in the web servers.

Where are the best locations to write an error log in Windows?

Where would you write an error log file, say ErrorLog.txt, in Windows? Keep in mind that the path would need to be writable by basic users.
I know the event log is a possible location for writing errors, but does it work with "user"-level permissions?
EDIT: I am targeting Windows 2003, but I was posing the question in such a way as to have a "General Guideline" for where to write error logs.
As for the EventLog, I have had issues before in an ASP.NET application where I wanted to log to the Windows event log, but I had security issues causing me heartache. (I do not recall the issues I had, but remember having them.)
Have you considered logging to the Event Viewer instead? If you want to write your own log, I suggest the user's local app data directory. Make a product directory under there. Its location is different on different versions of Windows.
On Vista, you cannot put files like this under c:\program files. You will run into a lot of problems with it.
In .NET, you can find out this folder with this:
Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData)
And the Event Log is fairly simple to use too:
http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.aspx
Text files are great for a server application (you did say Windows 2003). You should have a separate log file for each server application, the location is really a matter of convention to agree with administrators. E.g. for ASP.NET apps I've often seen them placed on a separate disk from the application under a folder structure that mimics the virtual directory structure.
For client apps, one disadvantage of text files is that a user may start multiple copies of your application (unless you've taken specific steps to prevent this). So you have the problem of contention if multiple instances attempt to write to the same log file. For this reason I would always prefer the Windows Event Log for client apps. One caveat is that you need to be an administrator to create an event log - this can be done e.g. by the setup package.
If you do use a file, I'd suggest using the folder Environment.SpecialFolder.LocalApplicationData rather than SpecialFolder.ApplicationData as suggested by others. LocalApplicationData is on the local disk: you don't want network problems to stop you from logging when the user has a roaming profile. For a WinForms application, use Application.LocalUserAppDataPath.
In either case, I would use a configuration file to decide where to log, so that you can easily change it. E.g. if you use Log4Net or a similar framework, you can easily configure whether to log to a text file, event log, both or elsewhere (e.g. a database) without changing your app.
The standard location(s) are:
C:\Documents and Settings\All Users\Application Data\MyApp
or
C:\Documents and Settings\%Username%\Application Data\MyApp
(aka %UserProfile%\Application Data\MyApp) which would match your user level permission requirement. It also separates logs created by different users.
Using the .NET runtime, these can be built as:
AppDir = System.Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
or
AppDir = System.Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
followed by:
MyAppDir = IO.Path.Combine(AppDir, "MyApp")
(Which, hopefully, maps to Vista profiles too.)
Personally, I would suggest using the Windows event log; it's great. If you can't, then write the file to the ApplicationData directory or the ProgramData ("Application Data" for all users on Windows XP) directory.
The Windows event log is definitely the way to go for logging of errors. You're not limited to the "Application" log as it's possible to create a new log target (e.g. "My Application"). That may need to be done as part of setup as I'm not sure if it requires administrative privileges or not. There's a Microsoft example in C# at http://support.microsoft.com/kb/307024.
Windows 2008 also has Event Log Forwarding which can be quite handy with server applications.
I agree with Lou on this, but I prefer to set this up in a configuration file like Joe said. You can use
<file value="${APPDATA}/Test/log-file.txt" />
("Test" could be whatever you want, or be removed entirely) in the configuration file, which causes the log file to be written to "/Documents and Settings/LoginUser/Application Data/Test" on Windows XP and to "/Users/LoginUser/AppData/Roaming/Test" on Windows Vista.
I am just adding this because I spent way too much time figuring out how to make this work on Windows Vista...
This works as-is with Windows applications. To use logging in web applications, I found Phil Haack's blog entry on this to be a great resource:
http://haacked.com/archive/2005/03/07/ConfiguringLog4NetForWebApplications.aspx
%TEMP% is always a good location for logs I find.
Going against the grain here - it depends on what you need to do. Sometimes you need to manipulate the results, so log.txt is the way to go. It's simple, mutable, and easy to search.
Take an example from Joel. FogBugz will send a log/dump of error messages via HTTP to their server. You could do the same and not have to worry about the user's access rights on their drive.
I personally don't like to use the Windows event log where I am right now, because we do not have access to the production servers, which means we would need to request access every time we wanted to look at the errors. Unfortunately that is not a speedy process, so your troubleshooting is completely halted while you wait for someone else. I also don't like that the entries kind of get lost among the ones from other applications. Sure, you can sort, but it's just a bit of a nuisance scrolling down. What you use will end up being a combination of personal preference and the limitations of the environment you are working in (log file, event log, or database).
Put it in the directory of the application. The users will need access to the folder to run and execute the application, and you can check write access on application startup.
The event log is a pain to use for troubleshooting, but you should still post significant errors there.
EDIT - You should look into the MS Application Blocks for logging if you are using .NET. They really make life easy.
Jeez Karma-killers. Next time I won't even offer a suggestion when the poster puts up an incomplete post.

Accessing a resource file from a filesystem plugin on SymbianOS

I cannot use the Resource File API from within a file system plugin due to a PlatSec issue:
*PlatSec* ERROR - Capability check failed - Can't load filesystemplugin.PXT because it links to bafl.dll which has the following capabilities missing: TCB
My understanding of the issue is that:
File system plugins are dlls which are executed within the context of the file system process. Therefore all file system plugins must have the TCB PlatSec privilege which in turn means they cannot link against a dll that is not in the TCB.
Is there a way around this (without resorting to a text file or an intermediate server)? I suspect not - but it would be good to get a definitive answer.
The Symbian file server has the following capabilities:
TCB ProtServ DiskAdmin AllFiles PowerMgmt CommDD
So any DLL being loaded into the file server process must have at least these capabilities. There is no way around this, short of writing a new proxy process as you allude to.
However, there is a more fundamental reason why you shouldn't be using bafl.dll from within a file server plugin: this DLL provides utility functions which interface to the file server's client API. Attempting to use it from within the file server will not work; at best, it will lead to the file server deadlocking as it attempts to connect to itself.
I'd suggest rethinking what you're trying to do, and investigating an internal file-server API to achieve it instead.
Using RFs/RFile/RDir APIs from within a file server plugin is not safe and can potentially lead to deadlock if you're not very careful.
Symbian 9.5 will introduce new APIs (RFilePlugin, RFsPlugin and RDirPlugin) which should be used instead.
There's a proper mechanism for communicating with plugins: RPlugin.
Do not use RFile. I'm not even sure it would work, as the path is checked in the Initialise step of the RFile functions, which is called before the plugin stack.
Tell us what kind of data you are storing in the resource file.
Things that usually go into resource files have no place in a file server plugin, even if that means hardcoding a few values.
Technically, you can send data to a file server plugin using RFile.Write() but that's not a great solution (intercept RFile.Open("invalid file name that only your plugin understands") in the plugin).
EDIT: Someone indicated that using an invalid file name will not let you send data to the plugin. Hey, I didn't like that solution either. For the sake of completeness, I should clarify: make up a filename that looks OK enough to get through to your plugin, like one using a drive letter that doesn't have a real drive attached to it (but will still be considered valid by filename-parsing code).
Writing code to parse the resource file binary in the plugin, while theoretically possible, isn't a great solution either.