Zimbra MTA

The Zimbra MTA (Mail Transfer Agent) receives mail via SMTP and routes each message, using Local Mail Transfer Protocol (LMTP), to the appropriate Zimbra mailbox server.

The Zimbra MTA server includes the following programs:

Postfix MTA, for mail routing, mail relay, and attachment blocking
Clam AntiVirus, an antivirus engine used to scan email messages and attachments for viruses
SpamAssassin and DSPAM, mail filters that attempt to identify unsolicited commercial email (spam), using a variety of mechanisms
Amavisd-New, a Postfix content filter used as an interface between Postfix and ClamAV / SpamAssassin

In the Zimbra Collaboration Suite configuration, mail transfer and delivery are distinct functions. Postfix primarily acts as a Mail Transfer Agent (MTA) and the Zimbra mail server acts as a Mail Delivery Agent (MDA).

MTA configuration is stored in LDAP. A configuration script polls the LDAP directory every two minutes for modifications and updates the Postfix configuration files with any changes.

Zimbra MTA Deployment

The Zimbra Collaboration Suite includes a precompiled version of Postfix. This version does not have any changes to the source code, but it does include configuration file modifications, additional scripts, and tools.

Postfix performs the Zimbra mail transfer and relay. It receives inbound messages via SMTP, and hands off the mail messages to the Zimbra server via LMTP, as shown in the following figure. The Zimbra MTA can also perform anti-virus and anti-spam filtering.

Postfix also plays a role in transfer of outbound messages. Messages composed from the Zimbra web client are sent by the Zimbra server through Postfix, including messages sent to other users on the same Zimbra server.

Figure 6: Postfix in a Zimbra Environment


*The term “edge MTA” is a generic term referring to any sort of edge security solution for mail. You may already deploy such solutions for functions such as filtering. The edge MTA is optional. Some filtering may be duplicated between an edge MTA and the Zimbra MTA.

Postfix Configuration Files

Zimbra modified the following Postfix files specifically to work with the Zimbra Collaboration Suite:

main.cf – Modified to include the LDAP tables. The configuration script in the Zimbra MTA pulls data from the Zimbra LDAP and modifies the Postfix configuration files.
master.cf – Modified to use Amavisd-New.

Important: Do not modify the Postfix configuration files directly! Some of the Postfix files are rewritten when changes are made in the administration console. Any changes you make will be overwritten.

MTA Functionality

Zimbra MTA Postfix functionality includes:

SMTP authentication
Attachment blocking
Relay host configuration
Postfix-LDAP integration
Integration with Amavisd-New, ClamAV, and SpamAssassin

SMTP Authentication

SMTP authentication allows authorized mail clients from external networks to relay messages through the Zimbra MTA. The user ID and password are sent to the MTA when the SMTP client sends mail, so the MTA can verify whether the user is allowed to relay mail.

Note: User authentication is provided through the Zimbra LDAP directory server or, if implemented, through the Microsoft Active Directory Server.
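SMTP authentication can also be toggled from the command line. A minimal sketch, assuming a typical single-server installation where zimbraMtaAuthEnabled is the global toggle (the MTA configuration script picks up the change on its next poll):

zmprov mcf zimbraMtaAuthEnabled TRUE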

SMTP Restrictions

In the administration console, you can enable restrictions so that Postfix rejects messages from incoming SMTP clients that exhibit non-standard or otherwise disapproved behavior. These restrictions provide some protection against ill-behaved spam senders. By default, SMTP protocol violators (that is, clients that do not greet with a fully qualified domain name) are restricted. DNS-based restrictions are also available.

Important: Understand the implications of these restrictions before you implement them. You may want to receive mail from people outside of your mail system, but those mail systems may be poorly implemented. You may have to compromise on these checks to accommodate them.
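From the command line, these restrictions are managed as values of a multi-valued global attribute. A hedged sketch, assuming the zimbraMtaRestriction attribute used by typical ZCS releases (the leading + appends a value rather than replacing the list):

zmprov mcf +zimbraMtaRestriction reject_non_fqdn_sender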

Relay Host Settings

Postfix can be configured to send all non-local mail to a different SMTP server. Such a destination SMTP server is commonly referred to as a “relay” or “smart” host. You can set this relay host from the administration console.

A common use case for a relay host is when an ISP requires that all your email be relayed through a designated host, or when you use a filtering SMTP proxy server.

In the administration console, the relay host setting must not be confused with the Web mail MTA setting. The relay host is the MTA to which Postfix relays non-local email. The Web mail MTA is used by the Zimbra server to send composed messages and must point to the Postfix server installed with the Zimbra MTA package.

Important: Use caution when setting the relay host to prevent mail loops.
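The relay host can also be set with zmprov. A sketch, with relay.example.com standing in for your provider's smart host:

zmprov mcf zimbraMtaRelayHost relay.example.com:25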

MTA-LDAP Integration

The Zimbra LDAP directory service is used to look up email delivery addresses. The version of Postfix included with Zimbra is configured during the installation of the Zimbra Collaboration Suite to use the Zimbra LDAP directory.

Account Quota and the MTA

Account quota is the storage limit allowed for an account. Account quotas can be set by COS or per account. The MTA attempts to deliver a message; if the Zimbra user’s mailbox exceeds the set quota, the Zimbra mailbox server rejects the message because the mailbox is full, and the sender receives a bounce message. You can view account quotas from the administration console, Monitoring Server Statistics section.
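Quotas can also be set from the command line. A sketch with hypothetical names (zimbraMailQuota values are in bytes):

zmprov ma user@example.com zimbraMailQuota 536870912
zmprov mc default zimbraMailQuota 1073741824

The first line sets a 512 MB quota on a single account; the second sets a 1 GB quota on the default COS.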

MTA and Amavisd-New Integration

The Amavisd-New utility is the interface between the Zimbra MTA and the ClamAV and SpamAssassin scanners.

Anti-Virus Protection

Clam AntiVirus software is bundled with the Zimbra Collaboration Suite as the virus protection engine. The Clam anti-virus software is configured to block encrypted archives, to send notification to administrators when a virus has been found, and to send notification to recipients alerting them that a mail message containing a virus was not delivered.

The anti-virus protection is enabled during installation. You can also enable or disable virus checking from Global Settings on the administration console. By default, the Zimbra MTA checks every two hours for any new anti-virus updates from ClamAV.

Note: Updates are obtained via HTTP from the ClamAV website.

Anti-Spam Protection

SpamAssassin and DSPAM are spam filters bundled with ZCS. When ZCS is installed, spam training is automatically enabled to let users train spam filters when they move messages in and out of their junk folders.

The SpamAssassin default configuration for ZCS is as follows:

zimbraSpamKillPercent: The spaminess percentage beyond which a message is dropped. The default kill percent is 75%. Mail that scores 75% or higher is considered spam and is not delivered. A SpamAssassin score of 20 is considered 100%, so 75% equates to a spam score of 15.
zimbraSpamTagPercent: The spaminess percentage beyond which a message is marked as spam. The default tag percent is 33%. Mail that scores 33% or higher is considered spam and is delivered to the Junk folder. Since a SpamAssassin score of 20 equates to 100%, 33% equates to a spam score of 6.6.

A subject prefix can be configured so that messages considered spam are flagged as such in the subject line. When a message is tagged as spam, the message is delivered to the recipient’s Junk folder.

You can change these settings from the administration console, Global Settings Anti-Spam tab.
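The same thresholds can also be set from the command line, for example to restore the defaults described above:

zmprov mcf zimbraSpamKillPercent 75
zmprov mcf zimbraSpamTagPercent 33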

Note: ZCS configures the spam filter to add 0.5 to the SpamAssassin score if DSPAM marks the message as spam and to deduct 0.1 if DSPAM does not label it as spam.

Anti-Spam Training Filters

When ZCS is installed, the automated spam training filter is enabled and two feedback mailboxes are created to receive mail notification.

A spam training user, which receives notification about mail that was not marked as junk but should have been.
A non-spam (ham) training user, which receives notification about mail that was marked as junk but should not have been.

For these training accounts, the mailbox quota is disabled (i.e. set to 0) and attachment indexing is disabled. Disabling quotas prevents bouncing messages when the mailbox is full.

How well the anti-spam filter works depends on how well it recognizes what is and is not spam. The SpamAssassin filter can learn what is spam and what is not spam from messages that users specifically mark as Junk from their web client toolbar or Not Junk from the web client Junk folder. A copy of these marked messages is sent to the appropriate spam training mailbox. The Zimbra spam training tool, zmtrainsa, is configured to automatically retrieve these messages and train the spam filter.

The zmtrainsa script is enabled through a cron job to feed mail that has been classified as spam or as non-spam to the SpamAssassin application, allowing SpamAssassin to ‘learn’ what signs are likely to mean spam or ham. The zmtrainsa script empties these mailboxes each day.

By default, all users can give feedback in this way. If you do not want users to train the spam filter, you can modify the global configuration attributes zimbraSpamIsSpamAccount and zimbraSpamIsNotSpamAccount and remove the spam/ham account addresses from them. To remove an address, type:

zmprov mcf <attribute> ''
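For example, to clear both training addresses:

zmprov mcf zimbraSpamIsSpamAccount ''
zmprov mcf zimbraSpamIsNotSpamAccount ''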

Restart the Zimbra services, type zmcontrol stop and then zmcontrol start.

When these attributes are modified, messages marked as junk or not junk are not copied to the spam training mailboxes.

Initially, you may want to train the spam filter manually to quickly build a database of spam and non-spam tokens, words, or short character sequences that are commonly found in spam or ham. To do this, you can manually forward messages as message/rfc822 attachments to the spam and non-spam mailboxes. When zmtrainsa runs, these messages are used to teach the spam filter. Make sure you add a large enough sampling of messages to these mailboxes. In order to get accurate scores to determine whether to mark messages as spam, at least 200 known spam messages and 200 known ham messages must be identified.

The zmtrainsa command can be run manually to forward any folder from any mailbox to the spam training mailboxes. To send a folder to the spam training mailbox, type the command as:

zmtrainsa <server> <user> <password> spam [foldername]

To send a folder to the non-spam training mailbox, type:

zmtrainsa <server> <user> <password> ham [foldername]

The password is not needed in ZCS 4.5.6 and later; see CLI_zmtrainsa.

Turning On or Off RBLs

See Customizing the MTA for current information

Receiving and Sending Mail through Zimbra MTA

The Zimbra MTA delivers both the incoming and the outgoing mail messages. For outgoing mail, the Zimbra MTA determines the destination of the recipient address. If the destination host is local, the message is passed to the Zimbra server for delivery. If the destination host is a remote mail server, the Zimbra MTA must establish a communication method to transfer the message to the remote host. For incoming messages, the MTA must be able to accept connection requests from remote mail servers and receive messages for the local users.

In order to send and receive email, the Zimbra MTA must be configured in DNS with both an A record and an MX record. For sending mail, the MTA uses DNS to resolve hostnames and email-routing information. To receive mail, the MX record must be configured correctly to route messages to the mail server.

You must configure a relay host if you do not enable DNS. Even if a relay host is configured, an MX record is still required if the server is going to receive email from the internet.
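A quick way to verify both records is with dig (example.com and the address below are placeholders); the MX record should point at a hostname whose A record resolves to the MTA:

dig example.com MX +short
dig mail.example.com A +short

In zone-file form, a minimal pair of records would look like:

example.com.        IN MX 10 mail.example.com.
mail.example.com.   IN A  192.0.2.25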

Zimbra MTA Message Queues

When the Zimbra MTA receives mail, it routes the mail through a series of queues to manage delivery. The Zimbra MTA maintains four queues where mail is temporarily placed while being processed: incoming, active, deferred and hold.


Incoming.

The incoming message queue holds the new mail that has been received. Each message is identified with a unique file name. Messages in the incoming queue are moved to the active queue when there is room in the active queue. If there are no problems, messages move through this queue very quickly.

Active.

The active message queue holds messages that are ready to be sent. The MTA sets a limit to the number of messages that can be in the active queue at any one time. From here, messages are moved to and from the anti-virus and anti-spam filters before being delivered or moved to another queue.

Deferred.

Messages that cannot be delivered for some reason are placed in the deferred queue. The reasons for the delivery failures are documented in a file in the deferred queue. This queue is scanned frequently to resend the messages. If a message cannot be sent after the set number of delivery attempts, the message fails and is bounced back to the original sender.

Hold.

The hold queue keeps messages that could not be processed. No periodic delivery attempts are made for them; they remain in the queue until the administrator moves or removes them.
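The queues can be inspected and managed with the standard Postfix tools shipped with ZCS, run as the zimbra user. A few representative commands:

postqueue -p
postqueue -f
postsuper -d ALL deferred

The first lists the queued messages (equivalent to mailq), the second attempts immediate delivery of everything queued, and the third deletes all messages from the deferred queue.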

Verified Against: Zimbra Collaboration 8.0, 7.0. Date Created: 04/16/2014. Date Modified: 07/13/2015.
Source: https://wiki.zimbra.com/index.php?title=Zimbra_MTA

5 advantages of containers for writing applications

Containers can serve as a new way to package and architect applications. Pair them with DevOps and get ready for speed

Even Match.com could not have done a better job finding a mate for microservices. Microservices – single-function services built by small teams, independent from other functions, and communicating only through public interfaces – simply make a great match for containers. Microservices plus containers represent a shift to delivering applications through modular services that can be reused and rewired to perform new tasks.

Why do containers and writing apps go together so well?

Containerizing services like messaging, mobile app development and support, and integration lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments.


But don’t think of this as merely putting middleware into the cloud in its traditional form. Think of it as reimagining enterprise app development for faster, easier, and less error-prone provisioning and configuration. That adds up to more productive – and hopefully, less stressed – developers, especially at a time when speed is a core requirement for business.

When apps meet containers

One key idea behind microservices: Instead of large monolithic applications, application design will increasingly use architectures composed of small, single-function, independent services that communicate through network interfaces. This suits agile and DevOps approaches, and reduces the unintended effects associated with making changes in one part of a large monolithic program.

Linux containers can technically encapsulate monolithic applications effectively, just as if they were in a virtual machine or on a “bare metal” physical server. However, modern standards-compliant Linux container technology encourages breaking down applications into their separate processes and provides the tools to do so. (The Open Container Initiative – OCI – maintains standard runtime and image specifications for containers.)

This granular approach has several advantages:

1. Modularity equals flexibility

The current approach to containerization emphasizes the ability to update, restart, and scale components of an application independently – without unnecessarily taking down the whole app. In addition to this microservices-based approach, you can share functionality among multiple apps in much the same manner as service-oriented architectures more broadly. This means you’re not rewriting common functions (often in subtly incompatible ways) for every application.

2. Layers and image version control: DevOps win

Each container image file is made up of a series of layers. When the image changes, a new layer is created that’s essentially a set of filesystem changes. Configuration metadata such as environment variables or default arguments are properties of the image as a whole rather than any particular layer.

A variety of projects can be used to create images. These include the upstream Docker project, which requires a Dockerfile and a runtime daemon, and Buildah from Project Atomic, which can build a container image from scratch.

The image layers are reused when building a new container image. This makes the build process fast and has tremendous advantages for organizations applying DevOps practices like continuous integration and deployment (CI/CD). Intermediate changes are shared between images, further improving speed, size, and efficiency. Inherent to layering is version control. Every time there’s a new change, you essentially get a built-in change-log.
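As a rough sketch of how layering plays out in practice, consider this hypothetical Dockerfile for a small Python app. Each instruction produces its own layer, so a source-only change rebuilds just the final layers while the base and dependency layers come from cache:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]

Running docker history against the resulting image lists the individual layers and their sizes.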

3. Rollback: Fail fast safely

Perhaps the best part about layering is the ability to roll back. Every image has layers. Don’t like the current iteration of an image? Roll it back to the previous version. This further supports an agile development approach and helps make CI/CD a reality from a tools perspective.

4. Rapid deployment: Precious time gains

Getting new hardware up, running, provisioned, and available used to take days. And the level of effort and overhead was burdensome. OCI-compliant containers can reduce deployment to seconds. By creating a container for each process, developers can quickly share those similar processes with new apps.

And because an operating system doesn’t need to restart in order to add or move a container, deployment times are substantially shorter.

Think of the technology as supporting a more granular, controllable, microservices-oriented approach that places greater value on efficiency.

5. Orchestration: Take it to the next level

An OCI-compliant container runtime by itself is very good at managing single containers. However, when you start using more and more containers and containerized apps, broken down into hundreds of pieces, management and orchestration gets tricky. Eventually, you need to take a step back and group containers to deliver services – such as networking, security, and telemetry – across your containers.

Furthermore, because containers are portable, it’s important that the management stack that’s associated with them be portable as well. That’s where orchestration technologies like Kubernetes come in, simplifying this need for IT.

Rethinking applications

While containers can be used simply to encapsulate and isolate applications in a similar manner to virtual machines, they’re most effective when used as a fundamentally new way of packaging and architecting applications. Do this and pair them up with more agile and iterative DevOps processes, and you get apps that are more flexible, more reusable, and delivered more quickly.

For much more on containers and how they rewrite ideas about software packaging and development process, get my new book, which I wrote with my colleague William Henry: From Pots and Vats to Programs and Apps, freely downloadable at https://goo.gl/FSfgky.

Source: https://enterprisersproject.com/article/2017/8/5-advantages-containers-writing-applications

What is Kubernetes?

Kubernetes, or k8s (k, 8 characters, s…get it?), or “kube” if you’re into brevity, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, or hybrid clouds.

Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.) Google generates more than 2 billion container deployments a week—all powered by an internal platform: Borg. Borg was the predecessor to Kubernetes and the lessons learned from developing Borg over the years became the primary influence behind much of the Kubernetes technology.

Fun fact: The seven spokes in the Kubernetes logo refer to the project’s original name, “Project Seven of Nine.”

Red Hat was one of the first companies to work with Google on Kubernetes, even prior to launch, and has become the second leading contributor to the Kubernetes upstream project. Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation in 2015.


Why do you need Kubernetes?

Real production apps span multiple containers. Those containers must be deployed across multiple server hosts. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads. Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.

Kubernetes also needs to integrate with networking, storage, security, telemetry and other services to provide a comprehensive container infrastructure.


Of course, this depends on how you’re using containers in your environment. A rudimentary application of Linux containers treats them as efficient, fast virtual machines. Once you scale this to a production environment and multiple applications, it’s clear that you need multiple, colocated containers working together to deliver the individual services. This significantly multiplies the number of containers in your environment and as those containers accumulate, the complexity also grows.

Kubernetes fixes a lot of common problems with container proliferation—sorting containers together into a “pod.” Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services—like networking and storage—to those containers. Other parts of Kubernetes help you load balance across these pods and ensure you have the right number of containers running to support your workloads.

With the right implementation of Kubernetes—and with the help of other open source projects like Atomic Registry, Open vSwitch, heapster, OAuth, and SELinux—you can orchestrate all parts of your container infrastructure.


What can you do with Kubernetes?

The primary advantage of using Kubernetes in your environment is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines. More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments. And because Kubernetes is all about automation of operational tasks, you can do many of the same things that other application platforms or management systems let you do, but for your containers.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize resources needed to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running how you deployed them.
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.

Kubernetes, however, relies on other projects to fully provide these orchestrated services. With the addition of other open source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):

  • Registry, through projects like Atomic Registry or Docker Registry.
  • Networking, through projects like OpenvSwitch and intelligent edge routing.
  • Telemetry, through projects such as heapster, kibana, hawkular, and elastic.
  • Security, through projects like LDAP, SELinux, RBAC, and OAUTH with multi-tenancy layers.
  • Automation, with the addition of Ansible playbooks for installation and cluster life-cycle management.
  • Services, through a rich catalog of precreated content of popular app patterns.

Learn to speak Kubernetes

Like any technology, there are a lot of words specific to the technology that can be a barrier to entry. Let’s break down some of the more common terms to help you understand Kubernetes.

Master: The machine that controls Kubernetes nodes. This is where all task assignments originate.

Node: These machines perform the requested, assigned tasks. The Kubernetes master controls them.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying container. This lets you move containers around the cluster more easily.

Replication controller:  This controls how many identical copies of a pod should be running somewhere on the cluster.

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod—no matter where it moves to in the cluster or even if it’s been replaced.

Kubelet: This service runs on nodes and reads the container manifests and ensures the defined containers are started and running.

kubectl: This is the command line configuration tool for Kubernetes.
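To make the terms concrete, here is a minimal sketch of a pod manifest; the name and image are placeholders, and it assumes kubectl is already configured against a running cluster:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80

Saved as hello-pod.yaml, kubectl apply -f hello-pod.yaml schedules the pod onto a node, kubectl get pods shows its status, and kubectl delete pod hello-pod removes it.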


Using Kubernetes in production

Kubernetes is open source. And, as such, there’s not a formalized support structure around that technology—at least not one you’d trust your business on. If you had an issue with your implementation of Kubernetes, while running in production, you’re not going to be very happy. And your customers probably won’t, either.

That’s where Red Hat OpenShift comes in. OpenShift is Kubernetes for the enterprise—and a lot more. OpenShift includes all of the extra pieces of technology that makes Kubernetes powerful and viable for the enterprise, including: registry, networking, telemetry, security, automation, and services. With OpenShift, your developers can make new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration that can turn a good idea into new business quickly and easily.

Best of all, OpenShift is supported and developed by the #1 leader in open source, Red Hat.


A look at how Kubernetes fits into your infrastructure


Kubernetes runs on top of an operating system (Red Hat Enterprise Linux Atomic Host, for example) and interacts with pods of containers running on the nodes. The Kubernetes master takes the commands from an administrator (or DevOps team) and relays those instructions to the subservient nodes. This handoff works with a multitude of services to automatically decide which node is best suited for the task. It then allocates resources and assigns the pods in that node to fulfill the requested work.

So, from an infrastructure point of view, there is little change to how you’ve been managing containers. Your control over those containers happens at a higher level, giving you better control without the need to micromanage each separate container or node. Some work is necessary, but it’s mostly a question of assigning a Kubernetes master, defining nodes, and defining pods.

What about docker?

The docker technology still does what it’s meant to do. When kubernetes schedules a pod to a node, the kubelet on that node will instruct docker to launch the specified containers. The kubelet then continuously collects the status of those containers from docker and aggregates that information in the master. Docker pulls containers onto that node and starts and stops those containers as normal. The difference is that an automated system asks docker to do those things instead of the admin doing so by hand on all nodes for all containers.

Source: https://www.redhat.com/en/topics/containers/what-is-kubernetes

3 Cool Linux Service Monitors

The Linux world abounds in monitoring apps of all kinds. We’re going to look at my three favorite service monitors: Apachetop, Monit, and Supervisor. They’re all small and fairly simple to use. apachetop is a simple real-time Apache monitor. Monit monitors and manages any service, and Supervisor is a nice tool for managing persistent scripts and commands without having to write init scripts for them.

Monit

Monit is my favorite, because it provides the perfect blend of simplicity and functionality. To quote man monit:

monit is a utility for managing and monitoring processes, files, directories and filesystems on a Unix system. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations. E.g. Monit can start a process if it does not run, restart a process if it does not respond and stop a process if it uses too much resources. You may use Monit to monitor files, directories and filesystems for changes, such as timestamps changes, checksum changes or size changes.

Monit is a good choice when you’re managing just a few machines, and don’t want to hassle with the complexity of something like Nagios or Chef. It works best as a single-host monitor, but it can also monitor remote services, which is useful when local services depend on them, such as database or file servers. The coolest feature is you can monitor any service, and you will see why in the configuration examples.

Let’s start with its simplest usage. Uncomment these lines in /etc/monit/monitrc:

 set daemon 120
 set httpd port 2812 and
     use address localhost  
     allow localhost        
     allow admin:monit      

Start Monit, and then use its command-line status checker:

$ sudo monit
$ sudo monit status
The Monit daemon 5.16 uptime: 9m 

System 'studio.alrac.net'
  status                  Running
  monitoring status       Monitored
  load average            [0.17] [0.23] [0.14]
  cpu                     0.8%us 0.2%sy 0.5%wa
  memory usage            835.7 MB [5.3%]
  swap usage              0 B [0.0%]
  data collected          Mon, 04 Sep 2017 13:04:59

If you see the message “/etc/monit/monitrc:289: Include failed — Success ‘/etc/monit/conf.d/*'” that is a bug, and you can safely ignore it.

Monit has a built-in HTTP server. Open a Web browser to http://localhost:2812. The default login is admin, monit, which is configured in /etc/monit/monitrc. You should see something like Figure 1 (below).

Click on the system name to see more statistics, including memory, CPU, and uptime.

That is fun and easy, and so is adding more services to monitor, like this example for the Apache HTTP server on Ubuntu.

check process apache with pidfile /var/run/apache2/apache2.pid
    start program = "service apache2 start" with timeout 60 seconds
    stop program  = "service apache2 stop"
    if cpu > 80% for 5 cycles then restart
    if totalmem > 200.0 MB for 5 cycles then restart
    if children > 250 then restart
    if loadavg(5min) greater than 10 for 8 cycles then stop
    depends on apache2.conf, apache2
    group server    

Use the appropriate commands for your Linux distribution. Find your PID file with this command:

echo $(. /etc/apache2/envvars && echo $APACHE_PID_FILE)

The various distros package Apache differently. For example, on Centos 7 use systemctl start/stop httpd.

After saving your changes, run the syntax checker, and then reload:

$ sudo monit -t
Control file syntax OK
$ sudo monit reload
Reinitializing monit daemon

This example shows how to monitor key files and alert you to changes. The Apache binary should not change, except when you upgrade.

    check file apache2
    with path /usr/sbin/apache2
    if failed checksum then exec "/watch/dog"
       else if recovered then alert

This example configures email alerting by adding my mailserver:

set mailserver smtp.alrac.net

monitrc includes a default email template, which you can tweak however you like.

man monit is well-written and thorough, and tells you everything you need to know, including command-line operation, reserved keywords, and complete syntax description.

apachetop

apachetop is a simple live monitor for Apache servers. It reads your Apache logs and displays updates in real time. I use it as a fast, easy debugging tool. You can test different URLs and see the results immediately: files requested, hits, and response times.

$ apachetop
last hit: 20:56:39         atop runtime:  0 days, 00:01:00             20:56:56
All:           12 reqs (   0.5/sec)         22.4K (  883.2B/sec)    1913.7B/req
2xx:       6 (50.0%) 3xx:       4 (33.3%) 4xx:     2 (16.7%) 5xx:     0 ( 0.0%)
R ( 30s):      12 reqs (   0.4/sec)         22.4K (  765.5B/sec)    1913.7B/req
2xx:       6 (50.0%) 3xx:       4 (33.3%) 4xx:     2 (16.7%) 5xx:     0 ( 0.0%)

 REQS REQ/S    KB KB/S URL
    5  0.19  17.2  0.7*/
    5  0.19   4.2  0.2 /icons/ubuntu-logo.png
    2  0.08   1.0  0.0 /favicon.ico

You can specify a particular logfile with the -f option, or multiple logfiles like this: apachetop -f logfile1 -f logfile2. Another useful option is -l, which makes all URLs lowercase. If the same URL appears as both uppercase and lowercase it will be counted as two different URLs.

Supervisor

Supervisor is a slick tool for managing scripts and commands that don’t have init scripts. It saves you from having to write your own, and it’s much easier to use than systemd.

On Debian/Ubuntu, Supervisor starts automatically after installation. Verify with ps:

$ ps ax|grep supervisord
 7306 ?        Ss     0:00 /usr/bin/python 
   /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf

Let’s take our Python hello world script from last week to practice with. Set it up in /etc/supervisor/conf.d/helloworld.conf:

[program:helloworld.py]
command=/bin/helloworld.py
autostart=true
autorestart=true
stderr_logfile=/var/log/hello/err.log
stdout_logfile=/var/log/hello/hello.log
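The helloworld.py script itself came from the earlier article and isn't shown here; a minimal hypothetical stand-in that keeps writing to stdout (so output lands in hello.log), saved at /bin/helloworld.py and marked executable, could be:

#!/usr/bin/env python
import sys
import time

# Print a line every few seconds so the Supervisor-managed log keeps growing.
while True:
    print("Hello World!")
    sys.stdout.flush()
    time.sleep(5)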

Now Supervisor needs to re-read the conf.d/ directory, and then apply the changes:

$ sudo supervisorctl reread
$ sudo supervisorctl update

Check your new logfiles to verify that it’s running:

$ sudo supervisorctl reread
helloworld.py: available
carla@studio:~$ sudo supervisorctl update
helloworld.py: added process group
carla@studio:~$ tail /var/log/hello/hello.log
Hello World!
Hello World!
Hello World!
Hello World!

See? Easy.

Visit Supervisor for complete and excellent documentation.

4 Awesome ways to find top memory consuming processes in Linux.

1. Finding out top memory consuming processes in Linux using ps command.

There is a one-liner available with the ps command which will help you find the top memory consuming processes in Linux.

Command:
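A commonly used one-liner (assuming GNU ps, which supports the --sort option) is:

ps aux --sort=-%mem | head -n 10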

Here the output is sorted according to memory utilisation, which will help you find the top memory consuming processes in Linux very easily.

2. Continuously monitoring top memory consuming processes in Linux.

If you need to monitor the output continuously, the below command, which uses the watch command, comes in very handy.

Command:
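For example, refreshing the same ps one-liner every second (assuming watch is installed):

watch -n 1 'ps aux --sort=-%mem | head -n 10'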


3. Top memory consuming processes in Linux using top command.

The same output as the ps command can also be achieved using the native top command in Linux to find the top memory consuming processes.

Command:
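With a recent procps-ng top you can sort by memory directly at launch; on older versions, start top and press Shift+M inside it:

top -o %MEM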


4. Find Top memory consuming processes in Linux using htop command.

There is one more utility called htop which will help you find the top memory consuming processes in Linux. In case it's not installed by default, follow this article.

Command:
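Simply start htop; on recent versions you can also pre-sort by memory at launch:

htop
htop -s PERCENT_MEM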

Once you execute the htop command, a continuously updating window opens, similar to top.


In order to sort the processes by memory utilisation, simply press the F6 key, then select memory and hit Enter. You will then see the processes sorted according to memory utilisation.


Source: https://linuxroutes.com/4-awesome-ways-to-find-top-memory-consuming-processes-in-linux/

Tail a Log File on Windows & Linux

It turns out there are a bunch of people on StackOverflow looking for ways to tail a log file, but there don’t appear to be many resources for all the different tips and tools to do this. If you’re a Java or .NET developer, just getting started with tailing log files, or a seasoned developer who needs something quick and easy to set up and go, there are several options. In fact, there may be too many.

Check out some tools I found that make tailing a log file a walk in the park. Tailing multiple log files? Want to tail logs remotely from a web browser? This list covers a whole array of needs.

 

The Standard Linux Tail Command

The de facto standard for linux systems is the ever-handy “tail” command. Need I say more?

$ tail -f /var/log/syslog -f /var/log/myLog.log

  • Quick and easy
  • No additional installation required
  • Can be used with multiple -f filenames in the same window as shown in the script example above
  • Unix only. See Tail for Win32 at the bottom of this post for a port to Windows.


Less +F

Brian Storti (@brianstorti) offers an alternative method to the standard tail -f: less +F, which behaves much like tail -f but allows for easily switching between navigation and watching modes. However, he notes that if you need to watch multiple files simultaneously, the standard tail -f offers better output. A quick example follows the list below.

  • Easy to use
  • Creates behavior comparable to tail -f
  • Allows for easy switching between navigation and watching mode
  • Not ideal for watching multiple files simultaneously
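For example (any growing log works; /var/log/syslog is just a common choice):

$ less +F /var/log/syslog

Press Ctrl-C to drop back into normal less navigation, and press F to resume following.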

 


Windows Powershell

Powershell is one of the most overlooked Windows apps for ops. This approach doesn’t have any extra features but can be perfect for opening a quick commandlet window and keeping an eye on the status of a file.

Use the following simple syntax to show the tail end of a log file in real-time.

Get-Content myTestLog.log -Wait

You can also filter the log right at the command line using regular expressions:

Get-Content myTestLog.log -Wait | where { $_ -match "WARNING" }

  • Quick and easy to get going
  • Practically zero learning curve
  • No additional installation necessary for newer windows machines
  • Requires Windows Powershell (duh!)
  • Slow for large files
  • Basic functionality, but some 3rd party extensions are available; for example, you need multiple cmdlet windows to monitor multiple files
  • How-To Geek provides a step-by-step tutorial for getting tail-like functionality with Powershell that you may find useful.

 


Stackify’s Retrace

Retrace is an APM tool that provides all the support you need to monitor and optimize your applications, including enhanced log management which fully indexes and tags your logs.  Retrace also tails log files in real-time.

  • Remotely tail log files via web browser
  • Search all log files, including iis logs
  • See how log files are trending and monitor specific logs
  • Supported for Windows & Linux systems
  • Free trial, low monthly cost

 

 

 


Vim (Using Tail Bundle Plugin)

Developed by Martin Krischik (@martin_krischik), this handy-dandy plugin for Vim allows you to use “the best tail plugin you can get.”

  • Vim die-hards can tail log files without ever leaving their favorite editor!
  • Multiple file tailing using tabs
  • “Preview” window updated according to your Vim usage
  • Read the open issues on the Google Code page before installing
  • Check out Krischik’s other projects here


Emacs

To tail a file in Emacs (@emacs): start Emacs, hit M-x (Alt and x keys together), and type “tail-file”. Then, enter the filename to tail. The net result is that this will spawn an external tail -f process. Emacs is much more than a tool for tailing log files, however; it’s packed with other features and functionality ranging from project planning tools to debugging, a mail and news reader, calendar, and more.

  • Customizable using Emacs Lisp code or a graphical interface
  • Packaging system for downloading and installing extensions
  • Unicode support for nearly all human scripts
  • Built-in documentation
  • Tutorial for new users
  • Content-aware editing modes including syntax highlighting

 


MultiTail

Developed by Folkert van Heusden (@flok99), MultiTail was inspired by wtail, which was written by André Majorel. This is one of the more complete UNIX offerings, in my humble opinion. It’s relatively new compared to some others on this list, with a stable version released in February 2015.

  • Uses wildcard matching to see if a more recently spawned logfile needs to be monitored
  • Uses regular expressions for filtering
  • Source code available in public Github repository
  • All major UNIX platforms supported

 


BareTail

Developed by Bare Metal Software, BareTail is a free tool for monitoring log files in real-time. The “Bare” in the name might prompt some to ask, “How can you get any barer than regular Tail?” It turns out the name is a carryover from the software development group that built it, and this tool provides a color-coded GUI above and beyond good ‘ole Unix Tail.

  • Developed for Windows
  • Monitors multiple files using tabs
  • Configurable highlighting
  • Allows instant scrolling to any point in the file, even for large files
  • Free version available; a registered license is $25. There’s also a BareTailPro, which is packed with even more features and offers a free demo.


LogTail

Developed by John Walker of Fourmilab (@Fourmilab), this tool doesn’t appear to have been supported in a long time (the website is dated 1997) and may not play well with the latest distros of Unix. You have been warned.

  • Allows you to monitor multiple log files on multiple servers at once
  • Automatically checks if the monitored process has spawned a fresh log file and adjusts monitoring accordingly
  • Old script (circa 1997) may not play well with newer Unix distros/Perl patches (built with Perl 4.0, patch level 36)
  • UNIX only

 


TraceTool

Developed by Thierry Parent, TraceTool is a great option for .NET developers needing to build their own log tailing feature. The code comes with a lot of power and features, but it might not be as good a choice if you simply want to run a quick program and be off on your merry way. You can also check it out on SourceForge.

  • Powerful tool with lots of customizability via code.
  • There is a learning curve, depending on how far you want to take TraceTool. Check out the “TraceTool Overview” screenshot on the CodeProject page and you’ll see what I mean.
  • Not a quick fix. If you simply want to open a quick executable and see the tail of a few log files, pick something else. This will take you some time to set up and get configured.
  • With great power comes great responsibility. The CodeProject discussion section has comments from users experiencing several different kinds of problems. Yet despite any problems users have, they consistently ranked the page highly, with an average vote of 4.97 at the time of this writing.
  • Source code is readily available for download, but you’ll need a .NET development environment setup to compile it.
  • Windows only.

 


SnakeTail.net

SnakeTail.net is developed by SnakeNest. Looks can be deceiving – I thought this Google Project was long dead but was pleasantly surprised that bugs are still being fixed and features added as recently as September 2016.

  • Low memory & CPU footprint even with large files
  • Customizable shortcut keys to jump around files quickly
  • Can tail a log directory where the latest log files are stored
  • Windows only

 


Notepad++

Hardcore fans of Notepad++ (@Notepad_plus) often like to work in it all day, every day. Now you can tail a log file in Windows without ever leaving Notepad++ by using the Document Monitor plugin (granted – hardcore fans probably already know all about this!):

  1. Open Notepad++ then from the top menu select “Plugins > Plugin Manager > Show Plugin Manager,” then check the option for “Document Monitor,” then click “Install.”
  2. Notepad++ will prompt you to restart the program (not restart your computer).
  3. Upon opening Notepad++ again, select “Plugins” and you should now see the “Document Monitor > Start to monitor” option. This will refresh the view of your document automatically every 3 seconds.

 


inotail

Developed by Tobias Klauser (@t_klauser), inotail is a basic tool with minimal options compared to the others on this list. But for those of you yearning for simplicity, this just might be the log tail tool you’re looking for. The most recent update, however, is from back in 2009, and the most recent version, inotail 0.5, was released in 2007.

  • Git and GitHub repos available
  • Uses the inotify API to determine if a file needs to be reread

 


Tail for Win32

Developed by Paul Perkins, Tail for Win32 is a Windows version of the UNIX ‘tail’ command, providing a quick and dirty way to use the Unix Tail command you’re used to on Windows systems. Many folks might consider this completely unnecessary on a windows system with the prevalence of Powershell these days, but it does provide a couple nice features you wouldn’t otherwise have:

  • Highlighted keyword matching
  • Can send email notifications via SMTP or MAPI when keywords are found
  • The ability to watch multiple files in real-time simultaneously
  • Can process files of any size on both local and networked drives
  • Download Tail for Win32 on SourceForge

 


JLogTailer

JLogTailer is a Java log tailer with regular expression features that makes it possible to view any log file entries in real-time. It’s an easy-to-use tool that’s helpful when you need to see what’s going into the end of your log files as it happens while you tinker with your code. Additionally, you can use JLogTailer to tail multiple log files if needed.

  • View log entries in real-time
  • Simple to use
  • Works for tailing multiple log files


WebTail

With so many programs generating log files, other tools are useful when you have direct access to the file system that stores each log. But what if you don’t have direct access to your log file? That’s where WebTail comes in, enabling you to tail any log file that can be viewed via a web server. In addition to allowing you to view entries the moment they’re appended to log files, WebTail requires less bandwidth than you’d otherwise use to download the entire file several times.

  • Useful for tailing log files stored on web servers
  • Save bandwidth by eliminating the need to download complete files multiple times
  • Tail any log file viewable via a web server


MakeLogic Tail

An advanced tail -f command with GUI, MakeLogic Tail is the tail for Windows. It can be used to monitor the log files of various servers and comes with a variety of other intuitive and useful features.

  • Shows the last several lines of growing log files
  • Real-time monitoring
  • Requires JRE 5.0
  • Easy to use GUI
  • Search current and all open log files
  • Highlight select keywords in various colors
  • Monitor most recently opened documents
  • Cascade or Tile the log file windows


Wintail

Wintail is a free program created by Andy Hofle after he struggled with viewing log files in real-time with Windows Notepad. There was no program like tail -f in UNIX for Windows at the time, so he wrote his own. It’s a useful tool enabling you to have multiple tiled windows open simultaneously, and you can also pause updates to examine files more closely when needed.

  • Supports large files over 2GB
  • Error highlighting
  • Drag-and-drop files you wish to monitor into the title or menu bar to open them
  • Create shortcuts to frequently-monitored files
  • Added support for Windows Server 2012 in November 2015


LogExpert

If you’re looking for an intuitive, easy-to-use tail application for Windows, LogExpert is a solid option offering search functionality, bookmarking, filter views, highlighting, time stamps, and other useful features that make monitoring a less frustrating task.

  • Search functions including RegEx
  • Flexible filter view
  • Columnizers
  • Highlight lines using search criteria
  • Supports Unicode, SFTP, log4j XML files, and third-party plugins
  • Plugin API for more log file data sources


glogg

A multi-platform GUI application for browsing and searching through long, complex log files, glogg can be described as a graphical, interactive combination of grep and less. An open-source tool released under the GPL by Nicolas Bonnefon, glogg functions on both Windows and Mac, making it a functional tool for any developer.

  • Use regular expressions to search for events in log files
  • Designed to help you spot and understand problems in the massive logs generated by embedded systems
  • Also useful for sysadmins searching through logs on databases or servers
  • Second window shows results from the current search
  • Supports grep/egrep like regular expressions
  • Colorizes logs and search results
  • Follows logs written to a disk in real-time

Source: https://stackify.com/13-ways-to-tail-a-log-file-on-windows-unix/