Friday, 24 April 2015

What is Central Management Store and how CMS replication works

What is Central Management Store

1. It is a repository (SQL Database) to store topology, configuration and policies as XML documents.

Topology - Topology information generated from topology builder
Configuration - All the Lync server settings
Polices - All the different policies like dial plan, client and conferencing

2. The CMS can be modified using Topology Builder, the Lync Server Management Shell, or the Lync Server Control Panel.
3. Access to the CMS is provided to these components through a DLL called Microsoft.Rtc.Management.Core.dll.
4. The CMS not only stores data; it also validates information before publishing it to the database.
5. It operates as a single-master, multiple-replica system. There is only one CMS master, and every server running Lync holds a local replica. All updates to information in the CMS take place at the master; the information is then replicated to all replicas.
6. Lync servers always read information from the local replica. This ensures resilience when the master, or the network path to the master, is unavailable.



CMS is implemented using three Windows services:

  • Lync Server Master Replicator Agent (MASTER)
  • Lync Server File Transfer Agent (FTA)
  • Lync Server Replica Replicator Agent (REPLICA).

Note: All three services run on the CMS master, and only REPLICA runs on all replica servers.

How CMS replication works

1. The process of transferring information updates from the master to the replicas is called "replication".
2. Three processes take place during each replication cycle:

a. Copying information from the master to the replicas.
b. Applying the received changes to the replica.
c. Reporting the status back to the master.

CMS Master directory structure

3. The CMS master uses a file share to store master data. The path is <Lync Server FileStore>\<CMS Service Id>\CMSFileStore\xds-master, where <Lync Server FileStore> is the name of the directory selected to be the FileStore.
4. The xds-master folder contains two sub-folders, replicas and working.
5. The replicas folder contains a sub-folder for each replica. For example, if you have 8 Front End servers in your topology, you will see 8 folders under replicas. Each folder name is the server's FQDN.
6. Within each replica folder there are two sub-folders, from-replica and to-replica.





Replica directory structure

7. Each replica uses a directory structure in the file share xds-replica (\\<replica>\xds-replica) to synchronize with the CMS master, where <replica> is the FQDN of the replica.
8. The xds-replica folder contains three sub-folders: from-master, to-master and working.



Replication flow

9. Every 60 seconds, a task runs to determine whether a change has been made to the CMS master that needs to be replicated.
10. All changes made since the last time the task ran are batched together into a data package, data.zip. The data package also contains metadata. The MASTER service is responsible for generating the data package.
11. To keep replication fast, the size of data.zip is kept under 100 KB.
12. The MASTER service places the data package in the to-replica folder for every replica. All Lync servers except Edge receive the data package over SMB; Edge Servers use an HTTPS channel over port 4443.
13. The FTA service on the CMS master is alerted once the MASTER service places the data package in the to-replica folder, and it copies the data package to every replica server (into the working folder).
14. The REPLICA service on the replica server copies the file to the from-master folder, extracts it, and applies the changes to the local CMS replica. After applying the changes, REPLICA generates a status package, status.zip.
15. The REPLICA service places the status package in the to-master folder.
16. The FTA service running on the CMS master copies the status package to the from-replica folder on the CMS master file share.
17. The MASTER service processes the status package and updates the CMS master.
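The folder hand-offs in the steps above can be sketched as a small simulation. This is an illustrative model only: in-memory dictionaries stand in for the file share folders, and the MASTER, FTA and REPLICA services are reduced to the stages of a single function. None of these names come from the real implementation.

```python
# Minimal sketch of one CMS replication cycle. Folder names mirror the
# real share layout (to-replica, from-master, to-master, from-replica);
# the services are reduced to plain steps.

def replication_cycle(master, replica, changes):
    # MASTER: batch pending changes into a data package (data.zip)
    master["to-replica"].append({"file": "data.zip", "changes": changes})
    # FTA: copy the data package from to-replica to the replica server
    replica["from-master"].append(master["to-replica"].pop())
    # REPLICA: apply the changes locally, then build a status package
    package = replica["from-master"].pop()
    replica["local-cms"].extend(package["changes"])
    replica["to-master"].append({"file": "status.zip",
                                 "applied": len(package["changes"])})
    # FTA: copy the status package back to the master's from-replica folder
    master["from-replica"].append(replica["to-master"].pop())
    # MASTER: process the status package and record the replica state
    status = master["from-replica"].pop()
    master["replica-status"] = ("UpToDate" if status["applied"] == len(changes)
                                else "Pending")

master = {"to-replica": [], "from-replica": [], "replica-status": "Unknown"}
replica = {"from-master": [], "to-master": [], "local-cms": []}
replication_cycle(master, replica, ["policy-update", "topology-change"])
print(master["replica-status"])  # -> UpToDate
```

In the real product, Get-CsManagementStoreReplicationStatus reports the equivalent of the "replica-status" value for each server.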

Important

Lync 2013 uses a "loosely coupled back-end database", which means no real-time data is written directly to the back-end database. Instead, data is stored in the local CMS database and then replicated to the back end.

Tuesday, 21 April 2015

Safety Net in Exchange 2013




What is Transport Dumpster

It is a feature introduced in Exchange 2007, designed to minimise data loss during mail delivery to replicated mailbox databases (CCR, LCR and DAG) in a lossy failover scenario.

Transport dumpster settings in Exchange 2007

MaxDumpsterSizePerDatabase : Defines the size available for each storage group on the Hub Transport server. The recommendation is that this be set to 1.5 times the maximum message size limit within your environment. The default value for this setting is 18 MB.

MaxDumpsterTime : Defines the length of time that a message remains within the transport dumpster if the dumpster size limit is not reached. The default is seven days.

If either the time or size limit is reached, messages are removed from the transport dumpster in first in, first out order. You can run Get-TransportConfig to see the current settings.
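The first in, first out trimming described above can be sketched as follows. The function and the message tuples are invented for illustration; only the default limits (18 MB, 7 days) come from the settings described above.

```python
# Illustrative sketch of FIFO eviction from the transport dumpster.
# Messages are (size_mb, age_days) tuples, oldest first; the limits
# mirror MaxDumpsterSizePerDatabase and MaxDumpsterTime.
from collections import deque

def trim_dumpster(messages, max_size_mb=18, max_days=7):
    queue = deque(messages)               # oldest message at the front
    # drop messages that exceed the retention time
    while queue and queue[0][1] > max_days:
        queue.popleft()
    # drop oldest messages until the total size fits the limit
    while queue and sum(size for size, _ in queue) > max_size_mb:
        queue.popleft()
    return list(queue)

# One 8-day-old message is evicted by age; the rest fit the size limit.
print(trim_dumpster([(8, 8), (6, 3), (6, 1)]))  # -> [(6, 3), (6, 1)]
```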

Changes in Exchange 2010

In Exchange 2010, the transport dumpster removes a message once it has been replicated to all database copies. This keeps the transport dumpster queue smaller by maintaining only copies of messages whose transaction logs haven't yet been replicated.

Changes in Exchange 2013

1. The transport dumpster is now "Safety Net".
2. It doesn't require replicated mailbox databases. It also works for mailbox databases that aren't part of a DAG, and for public folder databases.
3. Safety Net itself is made redundant to avoid a single point of failure: there is a Shadow Safety Net, which resubmits messages when the Primary Safety Net has not responded for 12 hours.

How Safety Net works

1. Safety Net works closely with Shadow redundancy. 
2. The Primary Safety Net exists on the Mailbox server that held the primary message before the message was successfully processed by the Transport service. 
3. After the primary server processes the primary message, the message is moved from the active queue into the Primary Safety Net on the same server.
4. The Shadow Safety Net exists on the Mailbox server that held the shadow message.
5. After the shadow server determines the primary server has successfully processed the primary message, the shadow server moves the shadow message from the shadow queue into the Shadow Safety Net on the same server.

Message resubmission from Safety Net

Message resubmissions from Safety Net are initiated by the Active Manager component of the Microsoft Exchange Replication service that manages DAGs and mailbox database copies. No manual actions are required to resubmit messages from Safety Net. 

  • After the automatic or manual failover of a mailbox database in a DAG.
  • After you activate a lagged copy of a mailbox database.

Note: The main requirement for successful resubmission from Safety Net for a lagged copy is that the amount of time messages are stored in Safety Net must be greater than or equal to the replay lag time of the lagged copy of the mailbox database.
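That requirement boils down to a simple comparison, sketched here with an invented function name (times expressed in hours for simplicity):

```python
# Can a Safety Net resubmission cover a lagged database copy?
# Only if messages are held at least as long as the copy's replay lag.

def can_cover_lagged_copy(safety_net_hold_hours, replay_lag_hours):
    return safety_net_hold_hours >= replay_lag_hours

print(can_cover_lagged_copy(48, 24))  # default 2-day hold, 1-day lag -> True
print(can_cover_lagged_copy(48, 72))  # 3-day lag exceeds the hold time -> False
```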

Message resubmission from Shadow Safety Net

Scenario 1

1. Active Manager requests a resubmission of messages from Safety Net for a mailbox database for the time interval 5:00 to 9:00. However, the Mailbox server that holds the Primary Safety Net has crashed due to a hardware failure. Active Manager repeatedly tries to contact the Primary Safety Net for 12 hours.

2. After 12 hours, Active Manager sends a broadcast message to the Transport service on all Mailbox servers in the transport high availability boundary looking for other Safety Nets that contain messages for the target mailbox database for the time interval 5:00 to 9:00. The Shadow Safety Net responds and resubmits the messages for the mailbox database for the time interval 5:00 to 9:00.

Scenario 2

1. The queue database on Mailbox server that holds the Primary Safety Net is corrupt, and a new queue database is created at 7:00. All of the primary messages stored in the Primary Safety Net from 1:00 to 7:00 are lost, but the server is able to store copies of successfully delivered messages in Safety Net starting at 7:00.

2. Active Manager requests a resubmission of messages from Safety Net for a mailbox database for the time interval 1:00 to 9:00.

3. The Primary Safety Net resubmits messages for the time interval 7:00 to 9:00.

4. The Primary Safety Net sends a broadcast message to the Transport service on all Mailbox servers in the transport high availability boundary looking for other Safety Nets that contain messages for the target mailbox database for the time interval 1:00 to 7:00 for which the Primary Safety Net has no message. The Shadow Safety Net generates a second resubmit request on behalf of the Primary Safety Net for resubmitting the shadow messages for the target mailbox database for the time interval 1:00 to 7:00.

Important

1. All delivery status notifications (DSNs) and non-delivery reports (NDRs) are suppressed for Safety Net resubmits.

2. Users removed from a distribution group may not receive a resubmitted message when the Shadow Safety Net resubmits the message. For example, a message is sent to a group containing User A and User B, and both recipients receive the message. User B is subsequently removed from the group. Later, a resubmit request from Primary Safety Net is made for the mailbox database that holds User B's mailbox. However, the Primary Safety Net is unavailable for more than 12 hours, so the Shadow Safety Net server responds and resubmits the affected message. During resubmission when the distribution group is expanded, User B isn't a member of the group, and won't receive a copy of the resubmitted message.

3. By the same logic, users newly added to a distribution group may receive an old resubmitted message when the Shadow Safety Net resubmits it.


4. By default, Safety Net keeps messages for 2 days (the SafetyNetHoldTime setting). There is no size limit as in previous Exchange versions. When you run Get-TransportConfig you can still see the MaxDumpsterSizePerDatabase and MaxDumpsterTime parameters, but both are used only by Exchange 2010, not 2013.

Monday, 20 April 2015

Exchange 2013 Shadow Redundancy



Shadow redundancy is a feature introduced in Exchange 2010 to provide redundancy for messages for the entire time they're in transit. The solution involves a technique similar to the transport dumpster. With shadow redundancy, the deletion of a message from the transport databases is delayed until the transport server verifies that all of the next hops for that message have completed delivery. If any of the next hops fail before reporting back successful delivery, the message is resubmitted for delivery to that next hop.

Improvement in Exchange 2013

The major improvement to shadow redundancy in Microsoft Exchange Server 2013 is that the transport server now makes a redundant copy of any messages it receives before it acknowledges successfully receiving the message back to the sending server. The sending server's support or lack of support for shadow redundancy doesn't matter. This helps to ensure that all messages in the Exchange 2013 transport pipeline are made redundant while they're in transit.

Shadow redundancy components
Transport high availability boundary
The DAG in a DAG environment, or the Active Directory site in a non-DAG environment.
Primary message
The original message submitted to transport for delivery
Shadow message
The redundant copy of the message that the shadow server retains until it confirms the primary message was successfully processed by the primary server.
Primary server
The transport server that's currently processing the primary message.
Shadow server
The transport server that holds the shadow message for the primary server. A transport server may be the primary server for some messages and the shadow server for other messages simultaneously.
Shadow queue
The delivery queue where the shadow server stores shadow messages. For messages with multiple recipients, each next hop for the primary message requires separate shadow queues.
Discard status
The information a transport server maintains for shadow messages that indicates the primary message has been successfully processed.
Discard notification
The response a shadow server receives from a primary server indicating a shadow message is ready to be discarded.
Safety Net
The Exchange 2013 improved version of the transport dumpster. Messages that are successfully processed or delivered to a mailbox recipient by the Transport service on a Mailbox server are moved into Safety Net. For more information, see Safety Net.
Shadow Redundancy Manager
The transport component that manages shadow redundancy.
Heartbeat
The process that allows primary servers and shadow servers to verify the availability of each other.
  
How a shadow message is created

When receiving emails from outside the boundary

  1. An SMTP server transmits a message to the Transport service on a Mailbox server. The Mailbox server is the primary server, and the message is the primary message.
  2. While the original SMTP session with the SMTP server is still active, the Transport service on primary server opens a new, simultaneous SMTP session with the Transport service on a different Mailbox server in the organization to create a redundant copy of the message.
    • If the primary server is a member of a DAG, the primary server connects to a different Mailbox server in the same DAG. If the DAG spans multiple Active Directory sites, a Mailbox server in a different Active Directory site is preferred by default. This setting is controlled by the ShadowMessagePreference parameter on the Set-TransportService cmdlet. The default value is PreferRemote, but you can change it to RemoteOnly or LocalOnly.
    • If the primary server isn't a member of a DAG, the primary server connects to a different Mailbox server in the same Active Directory Site, regardless of the value of the ShadowMessagePreference parameter.
  3. The primary server transmits a copy of the message to the Transport service on the other Mailbox server, and the Transport service on that server acknowledges that the copy of the message was created successfully. The copy of the message is the shadow message, and the Mailbox server that holds it is the shadow server for the primary server. The message exists in a shadow queue on the shadow server.
  4. After the primary server receives acknowledgement from the shadow server, the primary server acknowledges the receipt of the primary message to the original SMTP server in the original SMTP session, and the SMTP session is closed.
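The four-step handshake above can be sketched roughly as follows. This is a simplified model, not real SMTP code; in particular, the response returned when shadow creation fails is an assumption (the real behaviour depends on the shadow redundancy configuration), and all names are invented.

```python
# Sketch of the shadow-first acknowledgement: the primary server only
# acks the original SMTP session after the shadow copy is confirmed.

class ShadowServer:
    def __init__(self):
        self.shadow_queue = []

    def store_shadow(self, msg):
        # steps 2-3: the shadow copy is created over a simultaneous session
        self.shadow_queue.append(msg)
        return True  # acknowledge that the shadow message was created

def receive_message(msg, shadow_server):
    if not shadow_server.store_shadow(msg):
        # assumed behaviour: without redundancy, don't accept the message yet
        return "451 try again later"
    # step 4: ack the original sender only after the shadow exists
    return "250 ok"

shadow = ShadowServer()
print(receive_message("mail1", shadow))  # -> 250 ok
print(shadow.shadow_queue)               # -> ['mail1']
```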

When sending emails to outside the boundary

When an Exchange 2013 transport server transmits a message outside the transport high availability boundary, and the SMTP server on the other side acknowledges successful receipt of the message, the transport server moves the message into Safety Net.

How shadow redundancy works

1. After successful creation of a shadow message, the primary and shadow servers need to stay in contact to track the progress of the message.
2. When the primary server successfully transmits the message to the next hop, and the next hop acknowledges receipt, the primary server updates the discard status of the message to "delivery complete".
3. The shadow server opens an SMTP session with the primary server (the heartbeat) to determine the discard status of the shadow message.
4. The primary server responds with the discard status, and the shadow messages are then removed or moved into Safety Net.
5. If the shadow server can't open an SMTP session with the primary server, the shadow server promotes itself to primary, promotes the shadow message to a primary message, and submits it to the next hop.
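The heartbeat and promotion logic in steps 2–5 can be sketched like this. It is an illustrative model with invented function and variable names, not the actual transport implementation:

```python
# One heartbeat from the shadow server's point of view: query the
# primary's discard status; if the primary is unreachable, promote the
# shadow messages and resubmit them to the next hop.

def heartbeat(shadow_queue, primary_reachable, discard_status):
    discarded, resubmitted = [], []
    for msg in list(shadow_queue):
        if primary_reachable:
            if discard_status.get(msg) == "delivery complete":
                shadow_queue.remove(msg)   # discard (or move into Safety Net)
                discarded.append(msg)
        else:
            shadow_queue.remove(msg)       # shadow promotes itself to primary
            resubmitted.append(msg)        # and submits the message to the next hop
    return discarded, resubmitted

queue = ["msg1", "msg2"]
print(heartbeat(queue, True, {"msg1": "delivery complete"}))  # -> (['msg1'], [])
print(heartbeat(queue, False, {}))                            # -> ([], ['msg2'])
```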

Shadow redundancy manager

Shadow Redundancy Manager is the core component of an Exchange 2013 transport server that's responsible for managing shadow redundancy. Its responsibilities include:

  • Maintaining the shadow server for each primary message being processed.
  • Maintaining the discard status to be sent to shadow servers.
  • Maintaining the list of primary servers for each shadow message.
  • Comparing the original database ID and the current database ID of the queue database where the primary copy of the message is stored.
  • Checking the availability of each primary server for which a shadow message is queued.
  • Processing discard notifications from primary servers.
  • Removing the shadow messages from the shadow queues after all expected discard notifications are received.
  • Deciding when the shadow server should take ownership of shadow messages, becoming a primary server.

Message processing after an outage

1. Transport server comes back with new database - When the transport server becomes unavailable, each server that has shadow messages queued for that server will assume ownership of those messages and resubmit them. The messages then get delivered to their destinations.
2. Transport server comes back with old database - After the server comes back online, it will deliver the messages in its queues, which have already been delivered by the servers that hold shadow copies of messages. This will result in duplicate delivery of these messages. Exchange mailbox users won't see duplicate messages due to duplicate message detection. However, recipients on non-Exchange messaging systems may receive duplicate copies of messages.

Important

Shadow redundancy can't protect messages in transit during the simultaneous failure of two or more transport servers, or in a single-server topology.



Friday, 17 April 2015

What is Loose Truncation in Exchange 2013

When normal Log truncation will happen

The following criteria must be met for a database copy's log file to be truncated when lag settings are left at their default values of 0 (disabled)

1. The log file must have been successfully backed up, or circular logging must be enabled.
2. The log file must be below the checkpoint (the minimum log file required for recovery) for the database.
3. All other lagged copies must have inspected the log file.
4. All other copies (not lagged copies) must have replayed the log file.

If log truncation is not happening, the transaction log disk will eventually run out of space.

Truncation behaviour in Exchange 2013

In Exchange 2013, log truncation doesn't occur on an active mailbox database copy when one or more passive copies are suspended or unhealthy (even if a backup completes or circular logging is enabled). This is because the transaction logs being generated on the active copy will be required to bring the now-suspended copies up to date when replication resumes.

If planned maintenance activities will last several days, you may see considerable log file buildup. To prevent the log drive from filling up with transaction logs, you can remove the affected passive database copy instead of suspending it. When the planned maintenance is completed, you can re-add (reseed) the passive database copy.

What is Loose Truncation

Exchange 2013 Service Pack 1 (SP1) introduces a new feature called loose truncation, which is disabled by default. When it's enabled, each database copy measures the free disk space on the drive holding the replicated logs. Loose truncation kicks in when free space falls below a low-space threshold (the default is 200 GB). No UI is available in the EAC to control the low-space threshold for database copies, so this value must be set in the system registry on each server that holds passive copies.

How to Configure

To configure loose truncation, you need to create three registry entries on each DAG member server. All three entries need to be created under the registry key HKLM\Software\Microsoft\ExchangeServer\V15\BackupInformation and need to use DWORD values



When loose truncation is enabled, Active Manager on the server holding the active database copy continues to calculate and publish the truncation point (the generation number of the logs that are no longer required) to all passive copies except the passive copy that has the most logs to replay (the oldest straggler).

For Example

If a database has three copies, one of which is offline, Active Manager might do the following:

Current log generation: 173,501
Copy 1 log replay queue: 20
Copy 2 log replay queue: 1
Copy 3 log replay queue: 110,200 (suspended)

Copy 3 is the oldest straggler, so it's ignored by Active Manager. Copy 1 has the largest queue (20), so the truncation point is calculated at 173,481 (173,501 − 20). Active Manager therefore advises all database copies that they can truncate logs up to generation 173,481. 

Exchange still defines a threshold for the minimum number of transaction logs that it needs to protect, that is, the number of logs retained for active and passive copies even when disk space is running low (below the threshold set in the registry). By default, the active copy keeps an additional 10,000 logs beyond the calculated truncation point, and passive copies keep an additional 100,000 logs.
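Putting the straggler rule, the replay-queue subtraction, and the 10,000/100,000-log buffers together, the arithmetic can be sketched as follows (the function name is invented; the numbers come from the example above):

```python
# Sketch of the loose-truncation arithmetic: ignore the oldest
# straggler, subtract the largest remaining replay queue, then hold
# back the extra-log safety buffers for active and passive copies.

def truncation_points(current_gen, replay_queues,
                      active_buffer=10_000, passive_buffer=100_000):
    # drop the copy with the most logs still to replay (oldest straggler)
    considered = sorted(replay_queues)[:-1] if len(replay_queues) > 1 else replay_queues
    base = current_gen - max(considered)
    return {"base": base,
            "active": max(base - active_buffer, 0),
            "passive": max(base - passive_buffer, 0)}

points = truncation_points(173_501, [20, 1, 110_200])
print(points["base"])  # -> 173481, matching the example above
```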

When the suspended database copy is brought back online, required log files will be missing, so Exchange puts the database copy into a FailedAndSuspended state. If AutoReseed is not configured, an administrator has to reseed the database copy from the active database copy.

Important

The LooseTruncation_MinCopiesToProtect entry controls whether loose truncation is in use (the default value of 0 means it isn't). If the specified value is less than the number of passive copies (for example, set to 1 when there are two passive copies of each database), loose truncation is enabled. If the specified value is greater than or equal to the number of passive copies, loose truncation isn't used: the feature is technically enabled, but blocked by the high value.



Wednesday, 15 April 2015

What You Need to Know about Microsoft Nano Server

Nano Server, a purpose-built operating system designed to run born-in-the-cloud applications and containers. As customers adopt modern applications and next-generation cloud technologies, they need an OS that delivers speed, agility and lower resource consumption.
1. What is Nano Server?
Nano Server is a pared down headless version of Windows Server that Microsoft has been developing under the code name Tuva. It is designed to run services and to be managed completely remotely. Microsoft describes Nano Server as “a purpose-built operating system designed to run born-in-the-cloud applications and containers.”
2. How is Nano Server different from Windows Server?
First, Nano Server will be completely headless – there’s no GUI. Next, Nano will have a much smaller footprint than Windows Server – significantly smaller even than Windows Server Core. Microsoft states Nano Server will have a 93% smaller VHD size, 92% fewer critical bulletins and 80% fewer required reboots. A smaller OS means fewer operating system components to maintain and less security exposure than the current Windows Server operating system. This can also improve scalability: a Microsoft Channel 9 video shows a Nano Server host with 1TB of RAM running 1,000 Nano Server VMs.
3. Will Nano server have any sort of graphical user interface or local management?
Nano will not have a graphical user interface and unlike Windows Server Core it will also have no command prompt and no PowerShell console. Even more, Nano Server will not have a local login. It is designed entirely to support services.
4. Can Nano Server run regular Windows applications?
No. You cannot run traditional Windows GUI applications on Nano Server. Instead, Nano Server is designed to provide infrastructure services.
5. If Nano Server doesn’t run Windows applications what does it run?
Microsoft puts forward two core scenarios for Nano Server: cloud infrastructure services such as Hyper-V, Hyper-V clusters, and Scale-Out File Servers (SOFS), and born-in-the-cloud applications running in virtual machines, containers, or on development platforms that do not require a UI on the server. Nano Server will support a number of different runtimes including C#, Java, Node.js, and Python. Nano Server will be API-compatible with Windows Server within the subset of components Nano provides.
6. Besides the GUI and command shell what else did Microsoft remove from Windows Server to make Nano Server?
In addition to dropping the graphical user interface and command shells Microsoft also eliminated 32-bit support (WOW64), MSI installer support and many default Server Core components.
7. How do you manage Nano Server if there’s no GUI and no command prompt?
All management of Nano will be performed remotely using WMI and PowerShell. Microsoft has also stated that Nano will have Windows Server Roles and Features support using Features on Demand and DISM (Deployment Image Servicing and Management). Nano will also support remote file transfer, remote script authoring and remote debugging including remote debugging from Visual Studio. Microsoft also stated that they will provide a new Web-based management tool for Nano Server.
8. Will Nano Server replace Windows Server?
No. Nano Server is a specialized infrastructure server. Microsoft will continue to release new versions of Windows Server as a general purpose server operating system for the foreseeable future.
9. When will Nano server be released?
Microsoft has not stated when Nano Server will be available but it’s expected to be released with the next version of Windows Server in 2016.
10. Where can I find more information about Nano Server?
You can learn more about the upcoming Windows Nano Server at the Windows Server Blog. In addition, Microsoft will be releasing more information about Nano Server at BUILD and Ignite.

Tuesday, 14 April 2015

Dynamic Quorum in Exchange 2013 DAG



Before getting into Dynamic Quorum, we should understand what the quorum model is and the types of quorum used in Exchange's high availability feature.

What is Quorum

The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain while still remaining online.  If an additional failure occurs beyond this threshold, the cluster will stop running. Generally quorum is used to avoid split brain on the cluster environment.

Quorum Types

Majority Node Set [MNS] is a Windows Clustering model used since early versions of Exchange. This model requires 50% of the voters (servers and/or one file share witness) to be up and running.

We should understand the two major types of quorum which are used frequently with Exchange servers:

1. Node and File share majority

DAGs with an even number of members use the failover cluster's Node and File Share Majority quorum mode, which uses an external witness server that acts as a tie-breaker. In this quorum mode, each DAG member gets a vote. In addition, the witness server is used to provide one DAG member with a weighted vote. 

Any member of the DAG that can communicate with the witness server can place a Server Message Block [SMB] lock on the witness server's witness.log file. The DAG member that locks the witness server (the locking node) retains an additional vote for quorum purposes. 

2. Node Majority

DAGs with an odd number of members use the failover cluster's Node Majority quorum mode. In this mode, each member gets a vote and each member's local system disk is used to store the cluster quorum data. If the configuration of the DAG changes, that change is reflected across the different disks. 

What is Dynamic Quorum

Windows Server 2012 introduced a new model called Failover Clustering Dynamic Quorum, which we can use with Exchange. When using Dynamic Quorum, the cluster dynamically manages the vote assignment to nodes based on the state of each node. When a node shuts down or crashes, it loses its quorum vote. When a node successfully re-joins the cluster, it regains its quorum vote. By dynamically adjusting the assignment of quorum votes, the cluster can increase or decrease the number of quorum votes that are required to keep it running. This enables the cluster to maintain availability during sequential node failures or shutdowns.

The advantage this brings is that it is now possible for a cluster to run even if the number of nodes remaining in the cluster is less than 50%! By dynamically adjusting the quorum majority requirement, the cluster can sustain sequential node shutdowns down to a single node and still keep running.

We can enable Dynamic Quorum by editing the DAG's cluster properties from the Failover Cluster Management console.

How it works

To explain this, consider a three-node DAG. Normally, two nodes must stay up to keep the cluster online, but with the Dynamic Quorum feature the cluster can keep running with only one node up.


When the first node goes down, dynamic quorum removes the vote from the remaining node with the lowest ID. This is because, with only two nodes remaining in a cluster, we cannot sustain another failure: the majority of two votes is two. So, to keep the cluster from shutting down, one of the votes is removed, leaving only one vote required to maintain the cluster.
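A simplified sketch of this vote adjustment follows. It only models the lowest-ID rule and the sequential-shutdown behaviour described above; the real clustering algorithm is more involved, and the function name is invented.

```python
# As nodes go down one at a time, dynamic quorum adjusts the votes:
# with an even number of surviving nodes, the lowest-ID node's vote is
# removed so a majority of the remaining votes stays reachable.

def surviving_majority(up_nodes):
    votes = sorted(up_nodes)          # node IDs currently holding a vote
    if len(votes) % 2 == 0 and votes:
        votes = votes[1:]             # drop the lowest-ID node's vote
    needed = len(votes) // 2 + 1      # votes required to keep quorum
    return votes, needed

print(surviving_majority([1, 2, 3]))  # all three up -> ([1, 2, 3], 2)
print(surviving_majority([2, 3]))     # node 1 down  -> ([3], 1)
print(surviving_majority([3]))        # node 2 down  -> ([3], 1)
```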

Example

DAG and Database setup



Cluster setup


Cluster after node1 is down


Databases after node1 is down


Cluster after node2 is down


Database after node2 is down


Even with two nodes down in a three-node DAG, the cluster is still up and the databases are mounted on the server that remains online.

What is Data Loss Prevention




DLP is a system designed to detect a potential data leakage incident in a timely manner and prevent it. When this happens, sensitive data such as personal/company information, credit card details, social security numbers, etc., is disclosed to unauthorized users either with malicious intent or by mistake. This has always been an important matter for most companies as the loss of sensitive data can be very damaging for a business.

To prevent or monitor data leakage, Exchange administrators previously had to rely on third-party applications. Microsoft has now introduced a DLP feature in Exchange Server 2013 to achieve this natively.

How DLP Works

DLP works through DLP Policies, packages that contain a set of conditions made up of rules, actions and exceptions. These packages are based on Transport Rules and can be created in the Exchange Administration Center [EAC] or through the Exchange Management Shell [EMS]. Once created and activated, they will start analyzing and filtering e-mails. A nice feature is that you can create a DLP Policy without activating it, allowing you to test its behavior without affecting mail flow.

DLP Policies are nothing more than special Transport Rules. Because the transport rules with Exchange 2010 didn’t provide the means to properly analyze e-mail content, new types of transport rules were created in Exchange 2013 to make DLP possible. These allow information inside e-mails to be checked and classified as sensitive (or non-sensitive) based on keywords, dictionaries or even regular expressions, thus determining if an e-mail violates any organizational DLP Policies.

Another nice feature of DLP is called Policy Tips. These tips, similar to the MailTips introduced in Exchange 2010, inform senders that they might be violating a DLP Policy before they actually send the message! As we will see in the second part of this article, these Policy Tips only work on Outlook 2013 for now but it is just a matter of time until they appear in Outlook Web App as well.

DLP Operation Modes

Enforce Policy: the policy is enabled and all actions specified in the policy will be carried out

Test Policy with Notifications: the policy is enabled but the actions will not be executed, just logged into Message Tracking Logs. Policy Tips are displayed to users

Test Policy without Notifications: similar as above but no Policy Tips are displayed to users

There are three ways to create DLP policy

1. Through MS provided templates

The quickest way to start using DLP policies is to create and implement a new policy using a template. This saves you the effort of building a new set of rules from nothing. You will need to know what type of data you want to check for or which compliance regulation you are attempting to address.

2. Import a pre-built policy file from outside your organization

You can import policies that have already been created outside of your messaging environment by independent software vendors. In this way you can extend the DLP solutions to suit your business requirements

3. Create a custom policy without any pre-existing conditions.   Your enterprise may have its own requirements for monitoring certain types of data known to exist within a messaging system. You can create a custom policy entirely on your own in order to start checking and acting upon your own unique message data. You will need to know the requirements and constraints of the environment in which the DLP policy will be enforced in order to create such a custom policy.

General Notes

1. After creating DLP policies from the defined templates, we can create a new rule to override the policy. For example: "Let’s add a new rule to automatically override the policy if the e-mail comes from the CEO."

2. We have to specify a "DLP Incident Mailbox" while creating the rule. This mailbox receives a notification about each action performed by the DLP policies.
3. While creating a custom DLP policy we can add sensitive information types (passport numbers, driving licence numbers, etc.) and thresholds that determine when to take action.

Document Fingerprinting is a Data Loss Prevention (DLP) feature that converts a standard form into a sensitive information type, which you can use to define transport rules and DLP policies. For example, you can create a document fingerprint based on a blank patent template and then create a DLP policy that detects and blocks all outgoing patent templates with sensitive content filled in.

Sample sensitive information types



4. Thresholds

Minimum count: sets the lowest number of incidents at which the rule will be triggered. For example, if you set this field to 5 and the algorithm detects only 3 passport numbers in a message, the rule does nothing.
Maximum count: sets the highest number of incidents at which the rule will be triggered.
Minimum confidence level: sets the lowest confidence level at which the rule will be triggered. Similar to a Spam Confidence Level, the detection algorithm is not 100% accurate, so this field lets us tweak the detection "certainty" if necessary.
Maximum confidence level: sets the highest confidence level at which the rule will be triggered.
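The four thresholds combine into a simple range check, sketched here with an invented function name and example limits (the 5/50 and 75/100 values are illustrative, not defaults):

```python
# A DLP rule fires only when both the incident count and the detection
# confidence fall inside their configured [min, max] ranges.

def rule_triggers(count, confidence,
                  min_count=5, max_count=50,
                  min_confidence=75, max_confidence=100):
    return (min_count <= count <= max_count and
            min_confidence <= confidence <= max_confidence)

print(rule_triggers(3, 90))  # only 3 passport numbers found -> False
print(rule_triggers(7, 90))  # within both ranges -> True
```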




5. Policy Tips



6. The Message Tracking Log records the blocked email, and the sender also receives an NDR.




