We are implementing SharePoint 2010, and the architecture we are planning to deploy consists of 1 WFE, 1 Application server, and 1 DB server.
Our customer's policy, however, requires that the web front end be isolated from the DB, so the two servers will not be able to communicate directly (the WFE will talk only to the App server, and the App server will talk to the DB).
What are the implications of this kind of solution? Could we have issues with some of the services? I'm thinking especially about reporting: Excel Services, PerformancePoint, and Reporting Services.
Thanks.
This is not possible. Every server in the farm must be able to talk to SQL. This is just a guess, but it sounds like they are looking to implement the WFE in a DMZ and then have SQL on the internal network? Planning for that kind of farm is beyond what I can explain in a post here (and probably beyond what you should be configuring if you weren't already aware of the topologies), but this link should give you a start on the topology and security considerations for this setup. The one farm I have built that used this topology was very difficult to get set up and involved several meetings with the company's firewall team.
http://technet.microsoft.com/en-us/library/cc263513(v=office.14).aspx
I'm new to web development and just built my first website with .NET Core. It's primarily HTML, CSS, and JavaScript with a little C# for a contact form.
Without recommending any service providers (the question will be taken down otherwise), how do I go about deploying the website? The more details the better, as I have no idea what I'm doing haha.
Edit: I am definitely going to go with a service provider; however, the business I am building the website for doesn't have a large budget, so I want to find the best provider at the lowest cost.
Daniel,
As you suspect, this is a bit of a loaded question, as there are so many approaches. One approach is to use App Services within Microsoft Azure. You can create a free trial Azure account to start that includes a $200 credit, which is more than enough to do all of this for free. Then, using the Azure Management Portal, create an App Service (also free) on an App Service Plan in a region that makes sense for you (e.g. US West). Once you do that, you can download what is called a Publish Profile from within the App Service's management portal in Azure.
If you're using Visual Studio, for example, you can then right-click your project and "Publish" it (deploy it to the cloud, i.e. the App Service you just created). One option in that process is to import an Azure Publish Profile, which you can do with the one you just downloaded. This makes it really simple. The Publish Profile is really just connection information for your Azure App Service (open it in Notepad to see). It will chug for a bit and then publish and load the app for you. You can also get to the hosted version of your app by clicking the URL of the app on the main page of the App Service management portal.
This may be oversimplifying what you need to do, but this is a valid direction to take. AWS and others have similar approaches.
Again, there are tons of ways to do this, but this is a free approach. :-) I don't consider Azure a service provider in the sense you asked us to avoid; instead, I wanted to outline one turn-key approach with specific details on how to get there.
You can find specific steps in a lot of places, such as this link:
https://www.geeksforgeeks.org/deploying-your-web-app-using-azure-app-service/
DanielG's answer is useful, but you mentioned you don't want to use any services from a service provider.
Usually, there are only three ways to deploy the program.
The first is an app service from a service provider, as DanielG mentioned.
**Benefits of using service provider products:**
1. Very friendly to newcomers; follow the documentation and you can deploy the application in a few minutes.
2. It offers a stable, scalable service that monitors the health of your website.
3. You can get technical support from the provider.
**Shortcoming**
It is a paid service, and although Azure's service has a free quota, it will run out.
**Suggestion**
Websites that are officially launched should use the services of a service provider.
The second is to host it yourself and use a fixed IP for access (although it seems fixed IPv4 addresses are often no longer offered by ISPs).
**Benefits of using a fixed IP:**
If you have a fixed IP address, or if your carrier supports IPv6, you can deploy the website yourself and the public network can access it. And if you have a domain, it can also support HTTPS.
**Shortcoming**
1. There are cybersecurity risks, and a self-hosted machine is vulnerable to attack.
2. Without proper health monitoring, every problem has to be checked by yourself, and elastic scaling is very troublesome to achieve.
**Suggestion**
It is generally not recommended, because under normal circumstances there is no fixed IP available; broadband operators used to offer them, but most no longer do.
If you are interested, you can try IPv6 as a test.
The last is to use a tool such as ngrok or frp for intranet penetration (tunneling).
**Benefits of using intranet penetration:**
With free tunneling services such as ngrok, the URL generated on each run is not fixed and there are some limitations, such as a new URL being generated after a certain period of time, but this is enough for testing.
Of course, you can purchase the paid tier of such a tool, which provides fixed URLs and supports HTTPS.
**Shortcoming (same as the second option)**
**Suggestion**
Functionally this is the same as the second option, and the physical devices running the website are all your own. The tunneling tool (ngrok or frp) solves the problem of not having a fixed IP by providing a URL that can be reached from outside.
When there are few users and the demand on the web service is not high, this is a reasonable choice for individuals or small businesses, and is generally suitable for internal office (OA) use in a small business.
I will soon be hosting my MVC site with an external provider (I have yet to finalize the hosting company).
My website is developed using ASP.NET MVC 4 and it is using SQL Server 2008 as its database.
I will publish 3 applications under a single domain:
MVC 4 external site - Public access
WCF service - Consumed by the external site
MVC 4 internal site - Restricted access (admin and configuration purposes)
A few questions are striking me at the moment:
How can I make my code secure so that it can't be decompiled from its DLLs?
How can I make the CSHTML (Razor) views secure so that no one from the hosting company can see their internals?
Finally, how can I make the SQL Server database secure so that no one at the hosting company can open it through SSMS?
All of these questions are interrelated, so I have posted them as a single question.
I am not sure whether anyone at a hosting company would really bother with a customer's code or database, but it's just a security consideration.
The short answer is: you cannot. You need to have a minimum amount of trust in your hoster. If you don't, find someone else.
The only way to have the kind of security you want would be to host the website yourself, on a machine where you yourself control physical access (i.e. on site). The next best solution would be to rent a root server, but even then you cannot reliably lock out the hosting company.
You can obfuscate the DLLs to make decompiling harder (but not impossible), but there is no way (that I know of) to do that with Razor views. I would not recommend doing that for a website either way. The database cannot be obfuscated like that by design, especially if it is on a shared server.
Long story short: If you run code or store data on a machine you do not own, you can no longer completely control access to it.
Many applications use the following model:
Browsers or other clients interact with application servers.
Application servers (web servers or RPC servers) interact with data store servers (SQL servers or non-SQL storage).
Internet applications need application servers because, for performance, only simple features can be kept on the data servers. But I can't see why application servers are needed on an intranet.
For example, can we develop an Adobe AIR application that connects directly to a PostgreSQL server? I imagine we could deploy a central PostgreSQL server with many stored procedures and strict permissions, and let the Adobe AIR application fetch (and modify) data only by invoking those stored procedures.
Why don't most applications choose this simpler solution?
In general, there is no reason why you couldn't get an independent application to talk to a PostgreSQL server directly. Some applications do this and it works fine.
I'm not familiar enough with Adobe AIR to say whether it's possible in this context. In principle, if you can get a PostgreSQL driver, or if you can write your own using TCP sockets (the PostgreSQL network protocol is documented in detail in the official documentation), you could certainly connect directly.
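Just to make that concrete, here is a rough sketch of what a stored-procedure-only direct connection could look like. I'm writing it in C# with the Npgsql driver rather than AIR (which I can't speak to), and the connection string, function name, and parameter are all invented for illustration:

```csharp
using System;
using System.Data;
using Npgsql;

class DirectAccessExample
{
    static void Main()
    {
        // Hypothetical connection details; the role used here would be granted
        // EXECUTE on the relevant functions only, with no direct table access.
        var connectionString =
            "Host=db.example.local;Database=inventory;Username=app_user;Password=secret";

        using (var conn = new NpgsqlConnection(connectionString))
        {
            conn.Open();

            // Call a (hypothetical) stored procedure instead of sending raw SQL.
            using (var cmd = new NpgsqlCommand("get_open_orders", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("customer_id", 42);

                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}
```

The point is that the database role used by the client is granted EXECUTE on the functions and nothing else, so the stored procedures form the whole access surface.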
This being said, having a form of application server between the end-client and the database server isn't purely for performance.
Web-based development allows the SQL queries to be controlled by the server. Instead of exposing complete SQL access, you expose the features that the client can use. If you need to tweak the queries later (bug, change of data structure, ...), you can do this rather centrally on your application server, without having the need to deploy a new version of the client to each user.
Of course, you can build a similar abstraction with server-side programming directly in the database, but this isn't suitable for all applications. It may depend on what other features your application needs, for example if it needs to make use of a library written in another language. You can use procedural language bindings, but they are not always suitable: PL/Python is an "untrusted" language (which may cause security problems) and PL/Java needs an external add-on, for example.
In addition, not all applications are ultimately reserved for intranet usage nowadays. It often makes sense not to restrict yourself to intranet usage when you start designing an application.
I initially started with a direct access design and quickly found it useful to move to an application server where I talked to the DB via web services (a rough sketch of that layer follows after this list). Reasons included:
Handling DB restart, local connection loss, client IP address change, etc is much easier when you're talking to the DB over a stateless protocol like HTTP. This is more of an issue for remote workers.
Transactions are clearly demarcated and isolated in server-side transactional methods (I used EJB3 and container managed transactions)
It's much easier to add new clients like a phone app as they can share more of the code and business logic. Stored procedures in the database are very useful, but can be limited and occasionally frustrating.
Some tools/languages don't have built-in tools for talking to PostgreSQL directly, but can easily talk to a RESTful web service with XML or JSON request/response format.
DB admin is easier if you're dealing only with a single application server connection pool
The main downside is of course the extra layer means extra work and extra maintenance.
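For what it's worth, the service layer itself can stay very thin. My own was EJB3, but since this thread is .NET-heavy, here is a rough equivalent sketched as an ASP.NET Web API controller; OrderDto and OrderRepository are placeholders, not anything from a real project:

```csharp
using System.Collections.Generic;
using System.Web.Http;

// Data shape the client sees; deliberately independent of the table layout.
public class OrderDto
{
    public int Id { get; set; }
    public string Description { get; set; }
}

// Placeholder for whatever data-access code runs the query server-side.
public static class OrderRepository
{
    public static IEnumerable<OrderDto> GetOpenOrders(int customerId)
    {
        // A real implementation would query the database here.
        return new List<OrderDto>();
    }
}

// Thin, stateless HTTP endpoint: clients call GET /api/orders?customerId=42
// and never hold a database connection themselves.
public class OrdersController : ApiController
{
    public IEnumerable<OrderDto> Get(int customerId)
    {
        // The actual query or stored-procedure call lives here, server-side,
        // so it can be changed without redeploying every client.
        return OrderRepository.GetOpenOrders(customerId);
    }
}
```

The client only ever sees the GET operation and its JSON/XML response, so the query behind it can change without touching any installed client.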
You can, but...
Browser languages/libraries tend to have poor database support
What happens when someone wants to use this application remotely?
If you're not talking about browser-based applications, then that is exactly what many do. There are plenty of traditional installed client applications talking to a backend database either directly or via a wrapper (odbc/jdbc).
We are currently on Windows Server 2008 R2 with IIS 7.5, and we are going to expose some of our data via WCF services.
To do that, we are planning to host our services on IIS, but I heard that this is not a good idea for WCF services.
The problem with WAS is that it is a general-purpose hosting engine; it's unaware of whether it's hosting a WCF service or a website (as far as I know).
I heard that we can install an extension to WAS called Windows Server AppFabric.
Does anybody have any experience with AppFabric?
Does my app have to use the so-called 'Service Bus' to use AppFabric?
Should I go ahead and definitely install it?
At the most basic level, how and where can I install it? Does it require any licence?
Thanks in advance.
I don't think IIS is a bad idea - many developers use IIS to host their WCF services. IMHO you should only use what you need, so if all you need is a hosting framework, then IIS is a very good option for WCF services. It is (almost) unaware that it's hosting a WCF service, but in the majority of cases that isn't an issue.
Windows Server AppFabric, as it's currently released, provides three capabilities: a distributed caching system (so if you need to scale out your service you can use this cache to share state among the nodes); a packaging/deployment interface (with which you can package a project and deploy it a little more easily to IIS); and a management/monitoring interface (where you can monitor the instances of WCF and Workflow services running on your machine).
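If the caching capability is the part you care about, usage is roughly like this. This is a from-memory sketch against the Microsoft.ApplicationServer.Caching API, and it assumes a named cache has already been created and configured with the AppFabric tools:

```csharp
using Microsoft.ApplicationServer.Caching;

class CacheExample
{
    static void Main()
    {
        // Reads the cache host settings from the application's configuration.
        var factory = new DataCacheFactory();

        // "default" is whatever named cache you created with the AppFabric tools.
        DataCache cache = factory.GetCache("default");

        // Share state between nodes instead of keeping it in-process on one server.
        cache.Put("session:42", "some shared state");
        var value = (string)cache.Get("session:42");
    }
}
```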
Answers to your questions:
Yes, some people have experience with it :)
No, the application doesn't have to use it. You'd only use the Service Bus if you need its functionality (relaying).
Only if you need it. If you don't need caching or the monitoring capabilities, for example, then I'd say you don't need it. I've found in the past that the fewer components I have in my system, the less likely it is to break.
Go to http://msdn.microsoft.com/en-us/windowsserver/ee695849.aspx. AFAIK you don't need any license, but you can check the download page to see if it has more information.
There is no common reason not to host a service in IIS/WAS.
If you want to absolutely, totally, 100% make sure that your service is continuously running some process, such as a continuous loop or polling monitor, and any interruption, no matter how brief, is a major issue, then you'd want to look at alternative hosts.
Windows Server AppFabric is most useful for WF service hosting and caching. Note, however, that Windows Server AppFabric plus Windows Server Service Bus 1.0 represent the first steps in convergence between the Azure platform and the Windows Server private platform. In other words, whichever of the two ways you choose, that's what is going to be earning your bread and butter in five years' time.
I’m developing a .NET/C# application for an instrument which has a built-in PC (Core 2 CPU, 2.66 GHz, 4 GB RAM) and will have access to the Internet from behind the facility's IT firewall. The software is made up of two parts: a rich-client desktop app for UI and device control, and a web app (Silverlight) providing remote maintenance such as device configuration and calibration over the Internet using a browser. This device website will be hosted locally on the instrument using IIS. My questions are:
What is the risk of running an IIS-hosted website on a device?
What does it take to make it secure, so that the data and operation of the instrument are immune to potential hackers?
Is it a better design to provide web services (or WCF services) as the interface for remote maintenance? In this case, I’d create a rich-client service utility program that consumes the web services over the Internet for remote maintenance purposes.
Wow, that's an interesting project!
Personally I would take a different approach and have the device/instrument pull the maintenance info from a centralized server instead of hosting the service that performs it.
Do you really want to worry about maintaining updates and patches on that device?
But I'll try to answer as if you didn't have any choice.
1) The risks are the same as for any website. You have to deal with authentication; in your case I would use allowed IP ranges, etc.
2) Nothing is immune, but just Google WCF security for a start.
3) Yes, that is a better approach if the services are hosted outside the "instrument".
Good luck, sounds like a fun one.
See the WCF Developer Center for much information on WCF.
One feature of WCF is that it's possible to host a WCF service in almost any kind of program. In particular, you could host a secure WCF service on your device - without needing to run IIS or any other web server at all.
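As a rough illustration of how little is involved, a self-hosted service can look like the sketch below. The contract, address, and operation names are all invented, and a real device would want a secured binding rather than plain BasicHttpBinding:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IMaintenanceService
{
    [OperationContract]
    string GetCalibrationStatus();
}

public class MaintenanceService : IMaintenanceService
{
    public string GetCalibrationStatus()
    {
        return "OK"; // Placeholder for real instrument logic.
    }
}

class Program
{
    static void Main()
    {
        // Host the service inside the device's own process; no IIS needed.
        var baseAddress = new Uri("http://localhost:8000/maintenance");

        using (var host = new ServiceHost(typeof(MaintenanceService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IMaintenanceService),
                new BasicHttpBinding(), "");
            host.Open();

            Console.WriteLine("Maintenance service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```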