Audit trails and implementing HIPAA best practices [closed] - audit

Are there any best practices for audit trail implementation for HIPAA, starting with database design?

HIPAA compliance requires access control, information integrity, audit control, user authentication, and transmission security. As with other compliance regulations, it is necessary to use software, hardware, or other methods that provide monitoring and capturing of user activity in information systems that contain or use electronic PHI. The security and integrity of electronic PHI must be ensured against any unauthorized access, modification, or deletion.
“As required by Congress in HIPAA, the Privacy Rule covers:
• Health plans
• Health care clearinghouses
• Health care providers who conduct certain financial and administrative transactions electronically. These electronic transactions are those for which standards have been adopted by the Secretary under HIPAA, such as electronic billing and fund transfers"
To meet the HIPAA requirements, the entity must constantly audit and report all access attempts and events related to the databases and objects that contain sensitive PHI records.
Depending on the structure of the health institution, supervisors periodically verify HIPAA compliance to ensure its effectiveness. The verification frequency depends on the last verification report, and it can be less frequent when previous verifications have consistently shown compliance.
The HIPAA requirements do not prescribe specific methods for database and IT security. However, to meet the regulation's requirements for integrity, confidentiality, privacy, and availability of patient health information, the following steps help achieve compliance:
• Define and document the required permissions for each health institution employee
• Periodically review permission configurations on database objects and modify access rights in order to maintain the integrity, confidentiality, and accuracy of the PHI records (a sketch of such a review follows this list)
• Audit the system that stores and provides access to the PHI records
• Periodically analyze the audit information that shows events related to the PHI records, and take action where needed
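The permission review mentioned above can be partly automated by comparing what the database actually grants against the documented permission matrix. Below is a minimal sketch of such a check against SQL Server's permission catalog views, using plain JDBC. The server name, database name, and the phi schema are placeholders invented for the example, not anything mandated by HIPAA.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Sketch of a periodic permissions review: list which principals hold which
 * permissions on tables in a (hypothetical) PHI schema, so the output can be
 * compared against the documented, approved permission matrix.
 */
public class PhiPermissionReview {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; adjust to your environment.
        String url = "jdbc:sqlserver://dbserver;databaseName=ClinicalDb;integratedSecurity=true";
        String sql =
            "SELECT pr.name AS principal, pr.type_desc AS principal_type, " +
            "       pe.permission_name, pe.state_desc, o.name AS object_name " +
            "FROM sys.database_permissions pe " +
            "JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id " +
            "JOIN sys.objects o ON pe.major_id = o.object_id " +
            "WHERE pe.class = 1 " +                       // object-level permissions only
            "  AND o.schema_id = SCHEMA_ID('phi') " +     // hypothetical schema holding PHI tables
            "ORDER BY o.name, pr.name";

        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%-30s %-25s %-20s %-10s %s%n",
                        rs.getString("object_name"), rs.getString("principal"),
                        rs.getString("permission_name"), rs.getString("state_desc"),
                        rs.getString("principal_type"));
            }
        }
    }
}
```

The resulting report can be archived together with the sign-off from whoever approved the permissions, which makes the periodic review itself auditable.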
The following general actions are recommended in order to comply with HIPAA regulations:
• Maintain a SQL Server environment that is secure and constantly controlled. Provide SQL Server system security with continuous auditing of system events, whether the events are internal or external. Ensure this by enforcing strict rules that cannot be changed by unauthorized parties. Apply the rules to all SQL Server objects related to confidential PHI data (logins, databases, users, tables, etc.)
After the rules are set, audit and periodically analyze all security-related events; pay particular attention to permission changes on SQL Server objects and to access to databases/tables that contain PHI records (a sketch of such an audit configuration follows this list)
• Whether a user is internal or external, their actions must be monitored and documented in appropriate audit reports whenever they involve changes to database/table access permissions. The actions of administrative personnel must be documented as well; there must be no difference between regular users and administrators when it comes to auditing
• Use secure and officially verified hardware and software. Pay attention to common security configuration omissions, such as default logins and passwords, which intruders often exploit in attack attempts
Change all default, system-supplied security parameters on SQL Server. If possible, do not use mixed mode (which enables both Windows and SQL Server authentication); switch to Windows authentication only. When used for accessing SQL Server, Windows authentication enforces the Windows password policy: password history checks, and limits on password length and lifetime. The most important feature of the Windows password policy is login lockout: a login is locked for further use after a number of consecutive failed logon attempts
• Any change to or tampering with captured audit information must be evident, whether it was done by an external or internal party. Monitoring for tampering attempts is required for regulatory compliance, intrusion prevention, and potential security breach investigations
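As a concrete illustration of the auditing bullets above, the following sketch enables SQL Server Audit so that permission changes and all access to a table holding PHI are written to an audit file, executed here over JDBC. The audit names, file path, and table name are assumptions made up for the example; any equivalent mechanism that produces tamper-evident audit output serves the same goal.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/**
 * Sketch: configure SQL Server Audit so that permission changes and access to a
 * PHI table are captured. Object names, the file path, and the table are
 * placeholders; restrict the audit file location so that ordinary logins
 * (including administrators) cannot silently alter the captured records.
 */
public class ConfigurePhiAudit {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://dbserver;databaseName=ClinicalDb;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {

            // Server-level audit target: a file only the audit subsystem writes to.
            st.execute("USE master");
            st.execute("CREATE SERVER AUDIT HipaaAudit TO FILE (FILEPATH = N'E:\\SqlAudit\\')");
            st.execute("ALTER SERVER AUDIT HipaaAudit WITH (STATE = ON)");

            // Database-level specification: who touched the PHI table, plus permission
            // and role membership changes inside the database.
            st.execute("USE ClinicalDb");
            st.execute(
                "CREATE DATABASE AUDIT SPECIFICATION PhiAccessSpec " +
                "FOR SERVER AUDIT HipaaAudit " +
                "  ADD (SELECT, INSERT, UPDATE, DELETE ON OBJECT::phi.PatientRecord BY public), " +
                "  ADD (DATABASE_OBJECT_PERMISSION_CHANGE_GROUP), " +
                "  ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP) " +
                "WITH (STATE = ON)");
        }
    }
}
```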

Related

Keeping information private, even from database users

I have a unique use case. I want to create a front-end system to manage employee pay. I will have a profile for each employee and their hourly rate stored for viewing/updates in the future.
With user permissions, we can block certain people from seeing pay in the frontend.
My challenge is that I want to keep developers from opening up the database and viewing pay.
An initial thought was to hash the pay against my password. I'm sure some reverse engineering could be used to get the pay back out, but it wouldn't be as easy.
Open to thoughts on how this might be possible.
This is by no means a comprehensive answer, but I wanted at least to point out a couple of things:
In this case, you need to control security at the server level. Trying to control security at the browser level, using JavaScript (or any similar framework like React), is fighting a losing battle. It will always be insecure, since anyone (given the necessary time and resources) will eventually find out how to break it, and will see (and maybe even modify) the whole database.
Also, if you need a secure environment, you'll need to separate developers from the Production environment. They can play in the Development environment, and maybe in the Quality Assurance environment, but by no means in the Production environment. Not even read-only access. A separate team controls Production (access, passwords, firewalls, etc.) and deploys to it -- using instructions provided by the developers.
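Complementing the environment separation above: the hashing idea from the question would not let you read the rate back, but application-side encryption would. The value is encrypted before it is written, and the key lives outside the database (for example in the application server's keystore or a vault). Below is a minimal AES-GCM sketch; the class name is made up and the key-loading part is deliberately a stub.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/**
 * Sketch: encrypt the hourly rate in the application layer so that someone who
 * opens the database sees only ciphertext. The key must live outside the
 * database (vault, app-server keystore); loading it is stubbed out here.
 */
public class PayCrypto {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String encrypt(String hourlyRate, SecretKey key) throws Exception {
        byte[] iv = new byte[12];                       // 96-bit nonce recommended for GCM
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(hourlyRate.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];   // store the IV alongside the ciphertext
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out); // this string goes into the pay column
    }

    public static String decrypt(String stored, SecretKey key) throws Exception {
        byte[] in = Base64.getDecoder().decode(stored);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
        byte[] pt = cipher.doFinal(in, 12, in.length - 12);
        return new String(pt, StandardCharsets.UTF_8);
    }

    /** Placeholder: in practice load the key from a vault or keystore, never from the DB. */
    public static SecretKey loadKey(byte[] rawKey) {
        return new SecretKeySpec(rawKey, "AES");
    }
}
```

Note that this only protects against someone browsing the database directly; anyone who can reach the production application or its key store can still recover the values, which is exactly why the production-access separation described above still matters.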

Database schema, containing account information

My current goal is to develop the database structure for a web-based control system for an internet provider. It is a learning task with the following requirements:
The administrator registers subscribers in the system. The system provides a list of services (Telephone, Internet, Cable TV, IP-TV, etc.) and different subscription plans for each service.
A subscriber can select one or more services, with a particular subscription plan for each service. A subscriber has an account and can replenish its balance. Funds are withdrawn from the account according to the selected subscription plans. If the funds in the account are insufficient, the system blocks the user.
The system administrator has the rights to:
add, delete or edit the subscription plans;
register, block or unblock the user.
I think that all of the words highlighted in bold should be entities.
I developed the following schema:
And now I have a few questions:
Is it OK to have different tables for subscriber and account, or should they be merged (one subscriber can only have one account)?
Should current balance be stored as column in the account table, or should it be calculated each time?
Is it OK to have a table with only ID column?
Any critique and suggestions will be appreciated.
They should be merged, because your current model allows "one subscriber - many accounts" links.
It depends on your use cases. As an optimization, you can add a trigger on incoming_payment that updates subscriber.balance, so you can build reports very quickly.
A table that contains only a generated ID looks bad. In your model, service should have a type property (Telephone, Internet, Cable TV, IP-TV, etc.).
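To make the three answers above concrete, here is an illustrative JPA sketch in which subscriber and account are merged into a single entity that holds the balance, and the service table carries a type instead of being ID-only. All class and field names are invented for the example.

```java
import javax.persistence.*;
import java.math.BigDecimal;
import java.util.List;

/**
 * Illustrative sketch: one subscriber = one account/balance, and service has an
 * explicit type instead of being an ID-only table.
 */
@Entity
public class Subscriber {
    @Id @GeneratedValue
    private Long id;
    private String name;
    private boolean blocked;        // set when the balance cannot cover the selected plans
    private BigDecimal balance;     // kept up to date from incoming payments (e.g., via a trigger)

    @OneToMany(mappedBy = "subscriber")
    private List<Subscription> subscriptions;
}

enum ServiceType { TELEPHONE, INTERNET, CABLE_TV, IP_TV }

@Entity
class ProviderService {
    @Id @GeneratedValue
    private Long id;

    @Enumerated(EnumType.STRING)
    private ServiceType type;       // Telephone, Internet, Cable TV, IP-TV, ...
}

@Entity
class Plan {
    @Id @GeneratedValue
    private Long id;
    private String name;
    private BigDecimal monthlyFee;

    @ManyToOne private ProviderService service;
}

@Entity
class Subscription {
    @Id @GeneratedValue
    private Long id;

    @ManyToOne private Subscriber subscriber;
    @ManyToOne private ProviderService service;
    @ManyToOne private Plan plan;   // the chosen subscription plan for this service
}
```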

LDAP - write concern / guaranteed write to replicas prior to return

Is OpenLDAP (or any of LDAP's flavors) capable of providing write concern? I know it's an eventually consistent model, but there are more than a few DBs that have eventual consistency + write concern.
After doing some research, I'm still not able to figure out whether or not it's a thing.
The UnboundID Directory Server provides support for an assured replication mode in which you can request that the server delay the response to an operation until it has been replicated in a manner that satisfies your desired constraints. This can be controlled on a per-operation basis by including a special control in the add/delete/modify/modify DN request, or by configuring the server with criteria that can be used to identify which operations should use this assured replication mode (e.g., you can configure the server so that operations targeting a particular set of attributes are subjected to a greater level of assurance than others).
Our assured replication implementation allows you to define separate requirements for local servers (servers in the same data center as the one that received the request from the client) and nonlocal servers (servers in other data centers). This allows you to tune the server to achieve a balance between performance and behavior.
For local servers, the possible assurance levels are:
• Do not perform any special assurance processing. The server will send the response to the client as soon as it's processed locally, and the change will be replicated to other servers as soon as possible. It is possible (although highly unlikely) that a permanent failure that occurs immediately after the server sends the response to the client but before it gets replicated could cause the change to be lost.
• Delay the response to the client until the change has been replicated to at least one other server in the local data center. This ensures that the change will not be lost even in the event of the loss of the instance that the client was communicating with, but the change may not yet be visible on all instances in the local data center by the time the client receives the response.
• Delay the response to the client until the result of the change is visible in all servers in the local data center. This ensures that no client accessing local servers will see out-of-date information.
The assurance options available for nonlocal servers are:
• Do not perform any special assurance processing. The server will not delay the response to the client based on any communication with nonlocal servers, but a change could be lost or delayed if an entire data center is lost (e.g., by a massive natural disaster) or becomes unavailable (e.g., because it loses network connectivity).
• Delay the response to the client until the change has been replicated to at least one other server in at least one other data center. This ensures that the change will not be lost even if a full data center is lost, but does not guarantee that the updated information will be visible everywhere by the time the client receives the response.
• Delay the response to the client until the change has been replicated to at least one server in every other data center. This ensures that the change will be processed in every data center even if a network partition makes a data center unavailable for a period of time immediately after the change is processed. But again this does not guarantee that the updated information will be visible everywhere by the time the client receives the response.
• Delay the response to the client until the change is visible in all available servers in all other data centers. This ensures that no client will see out-of-date information regardless of the location of the server they are using.
The UnboundID Directory Server also provides features to help ensure that clients are not exposed to out-of-date information under normal circumstances. Our replication mechanism is very fast so that changes generally appear everywhere in a matter of milliseconds. Each server is constantly monitoring its own replication backlog and can take action if the backlog becomes too great (e.g., mild action like alerting administrators or more drastic measures like rejecting client requests until replication has caught up). And because most replication backlogs are encountered when the server is taken offline for some reason, the server also has the ability to delay accepting connections from clients at startup until it has caught up with all changes processed in the environment while it was offline. And if you further combine this with the advanced load-balancing and health checking capabilities of the UnboundID Directory Proxy Server, you can ensure that client requests are only forwarded to servers that don't have a replication backlog or any other undesirable condition that may cause the operation to fail, take an unusually long time to complete, or encounter out-of-date information.
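For illustration, a per-operation request could look roughly like the sketch below, which attaches an assured replication request control to a modify operation using the UnboundID LDAP SDK. The control class, enum values, and constructor arguments shown here are recalled from the SDK's commercial-edition extensions and should be verified against the SDK documentation before use; the host, port, and DN are placeholders.

```java
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.Modification;
import com.unboundid.ldap.sdk.ModificationType;
import com.unboundid.ldap.sdk.ModifyRequest;
import com.unboundid.ldap.sdk.unboundidds.controls.AssuredReplicationLocalLevel;
import com.unboundid.ldap.sdk.unboundidds.controls.AssuredReplicationRemoteLevel;
import com.unboundid.ldap.sdk.unboundidds.controls.AssuredReplicationRequestControl;

/**
 * Sketch of a write with an assured replication request control attached, so
 * the server delays its response until the requested level of assurance is met.
 * Class and constructor usage are assumptions based on the UnboundID LDAP SDK.
 */
public class AssuredWriteExample {
    public static void main(String[] args) throws Exception {
        LDAPConnection connection = new LDAPConnection("ds.example.com", 389);
        try {
            ModifyRequest modifyRequest = new ModifyRequest(
                    "uid=jdoe,ou=People,dc=example,dc=com",
                    new Modification(ModificationType.REPLACE, "mail", "jdoe@example.com"));

            // Ask the server to delay its response until the change has been applied on
            // all servers in the local data center and received by at least one remote
            // data center, waiting at most five seconds for that assurance.
            modifyRequest.addControl(new AssuredReplicationRequestControl(
                    AssuredReplicationLocalLevel.PROCESSED_ALL_SERVERS,
                    AssuredReplicationRemoteLevel.RECEIVED_ANY_REMOTE_LOCATION,
                    5000L));

            LDAPResult result = connection.modify(modifyRequest);
            System.out.println("Result: " + result.getResultCode());
        } finally {
            connection.close();
        }
    }
}
```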
From reviewing RFC 3384's discussion of replication requirements for LDAP, it looks as though LDAP only requires eventual consistency and does not require transactional consistency. Therefore, any products that support this feature are likely to do so with vendor-specific implementations.
CA Directory does support a proprietary replication model called MULTI-WRITE, which guarantees that the client obtains write confirmation only after all replicated instances have been updated. In addition, it supports the standard X.525 shadowing protocol, which provides weaker consistency guarantees and better performance.
With typical LDAP implementations, an update request will normally return as soon as the DSA handling the request has been updated, not when the replica instances have been updated. This is the case with OpenLDAP, I believe. The benefit is speed; the downside is the lack of a guarantee that an update has been applied to all replicas.
CA's directory product uses a memory-mapped system, and writes are so fast that this is not a concern.

SSAE 16 website Audit

Our client needs to get our website audited for security under SSAE-16. I don't know much about SSAE-16. So my question is: what areas are covered in this audit? I read somewhere that it is mostly relevant for data centers. Does the website need to be audited too? If yes, what are the process and requirements for auditing the website?
SSAE 16 is intended to report on controls relevant to a customer's financial statement audit. SSAE 16 has no pre-defined controls criteria like ISO 27001 or PCI-DSS; the controls that get tested are determined and defined by the service organization. SSAE 16 is NOT primarily intended for data centers; however, because of the history of SAS 70, many data centers still obtain an SSAE 16 report. There are other attestation reports that may be more appropriate depending on the type of service your website provides. I would recommend going to the AICPA website, http://www.aicpa.org/soc, to get a better understanding of what you need, then talking to a reputable CPA firm for additional guidance.

Using XADatasource or non-XA Datasource for JTA-based transactions in JPA

We are using JPA 1.0 for ORM-based operations and we want to have a JTA datasource for our application. We have only one database to which our application will connect.
We start our transaction boundary in the controller class and it extends through to the DAO layer: controller --> BOImpl --> DAO.
In the WebSphere Application Server admin console, when I define the datasource, should I use a non-XA datasource or an XA datasource?
My understanding is that for a single datasource I should not use an XADatasource.
Please let me know which one I should use.
For a single resource (like a single DB) you indeed do not need an XA-datasource.
On the other hand, bear in mind that most JTA/JTS implementations actually recognize that there is only one resource participating in a transaction, so the overhead for XA would then be minimal or none. There can also be additional participants in the transaction that you might not think of right now, like sending JMS messages.
But if you're really sure you only have 1 resource participating, you can safely go for non-XA.
I hope your doubt is clear by now, but here is more information on that just in case.
The typical XA resources are databases, messaging queuing products such as JMS or WebSphere MQ, mainframe applications, ERP packages, or anything else that can be coordinated with the transaction manager. XA is used to coordinate what is commonly called a two-phase commit (2PC) transaction. The classic example of a 2PC transaction is when two different databases need to be updated atomically. Most people think of something like a bank that has one database for savings accounts and a different one for checking accounts. If a customer wants to transfer money between his checking and savings accounts, both databases have to participate in the transaction or the bank risks losing track of some money.
The problem is that most developers think, "Well, my application uses only one database, so I don't need to use XA on that database." This may not be true. The question that should be asked is, "Does the application require shared access to multiple resources that need to ensure the integrity of the transaction being performed?" For instance, does the application use Java 2 Connector Architecture adapters or the Java Message Service (JMS)? If the application needs to update the database and any of these other resources in the same transaction, then both the database and the other resource need to be treated as XA resources.
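To make the distinction concrete, here is a hedged sketch of a bean-managed JTA transaction in which a database update and a JMS message must commit or roll back together. The JNDI names and the table are placeholders for whatever is configured in the WebSphere admin console. With only the datasource in the picture, a non-XA datasource is fine; once the JMS connection factory also participates, both resources need to be XA-capable so the application server can coordinate the two-phase commit.

```java
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;
import java.sql.PreparedStatement;

/**
 * Sketch: one JTA transaction spanning two resources, a database update and a
 * JMS message. Both the datasource and the connection factory must be
 * XA-capable for the commit to be atomic across them; JNDI names are placeholders.
 */
public class OrderProcessor {

    public void placeOrder(long orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/AppDS");
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:comp/env/jms/AppCF");
        Queue queue = (Queue) ctx.lookup("java:comp/env/jms/OrderQueue");

        utx.begin();
        javax.jms.Connection jms = null;
        try (Connection db = ds.getConnection();
             PreparedStatement ps = db.prepareStatement(
                     "UPDATE orders SET status = 'PLACED' WHERE id = ?")) {

            ps.setLong(1, orderId);
            ps.executeUpdate();                              // participant 1: the database

            jms = cf.createConnection();
            Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // With an XA connection factory, this session enlists in the same JTA transaction.
            session.createProducer(queue)
                   .send(session.createTextMessage("order placed: " + orderId));  // participant 2: JMS

            utx.commit();                                    // two-phase commit across both resources
        } catch (Exception e) {
            utx.rollback();                                  // neither the update nor the message survives
            throw e;
        } finally {
            if (jms != null) {
                jms.close();
            }
        }
    }
}
```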