Monday, 10 August 2015

Ajenti: Web based Control panel for managing linux server


Ajenti is an open-source, web-based control panel for managing remote system administration tasks from the browser, similar to Webmin. It is a powerful yet lightweight tool that provides a fast, responsive web interface for managing small server setups, and it is best suited for VPS and dedicated servers.

It can install packages and run commands, and you can view basic server information such as RAM in use, free disk space, etc. All this can be accessed from a web browser.
Optionally, an add-on package called Ajenti V allows you to manage multiple websites from the same control panel.

Pre-requisites for Ajenti:

1. A registered domain name
2. A hostname set on the server where you are going to install the Ajenti control panel (see the sketch after this list)
3. A non-root user with sudo privileges
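If the hostname is not set yet, here is a minimal sketch for prerequisite 2 on Ubuntu/Debian (panel.example.com is a placeholder; substitute your own domain):

 $ echo "panel.example.com" | sudo tee /etc/hostname  
 $ sudo hostname panel.example.com  
 $ echo "127.0.1.1 panel.example.com panel" | sudo tee -a /etc/hosts  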

Installing Ajenti


On your server, as a user with sudo access, first add the repository key. This is used to validate the sources of the Ajenti packages you will be installing.
 $ wget http://repo.ajenti.org/debian/key -O- | sudo apt-key add -  

Then add the actual repository to your sources list:

 $ echo "deb http://repo.ajenti.org/ng/debian main main ubuntu" | sudo tee -a /etc/apt/sources.list  

Now you can update your packages and begin the install process by running:
 $ sudo apt-get update && sudo apt-get install ajenti  

When it prompts you to continue, type Y and press ENTER. The install process may take a few minutes. After the process is over, start the Ajenti server:
 $ sudo service ajenti restart  
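To confirm the panel came up and is listening on its default port (8000), a quick check might be (netstat ships with net-tools, which is present on Ubuntu 14.04 by default):

 $ sudo netstat -tlnp | grep 8000  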

Configuring Ajenti

Now that Ajenti is installed, it needs to be configured; by default it picks up the server's basic metrics automatically. Open a web browser and go to https://panel.your_domain_name:8000/ or https://your_server_IP:8000/.

Log in with these default credentials:

Username: root

Password: admin

You will now be inside your new Ajenti control panel.

Plugins


Ajenti already has a lot of functionality built in by default, but if you want even more settings and configurable items in your panel, check out the Plugins section. Some plugins are enabled by default, while others aren't, usually due to unsatisfied dependencies.

You can install disabled plugins by clicking on them in the Plugins menu and pressing the button next to the dependency they require. Alternatively, if you later install an application manually and Ajenti has a plugin for it, restart Ajenti and the corresponding menu should appear the next time you log in.



System Management


Under the System section in the sidebar, there's a plethora of configurable items to choose from. You can manage hard drives with the Filesystems menu.

Congratulations: the Ajenti control panel is now installed. You can use it to manage your server.




Saturday, 8 August 2015

Mail Server: OFBiz with Mysql Setup

Apache OFBiz™ is an open source product for the automation of enterprise processes that includes framework components and business applications for ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), E-Business / E-Commerce, SCM (Supply Chain Management), MRP (Manufacturing Resource Planning), MMS/EAM (Maintenance Management System/Enterprise Asset Management), POS (Point Of Sale).

Apache OFBiz is licensed under the Apache License Version 2.0 and is part of The Apache Software Foundation.

Step 1: Install JAVA

Java is the primary requirement for installing Apache OFBiz; it requires at least Java 1.6. Make sure the proper Java version is installed on your system:
 # java -version  
If you do not have Java installed, use the tutorial below to install it; otherwise skip this step.

For Ubuntu, Debian and LinuxMint Users – Install JAVA 7 or Install JAVA 8

Step 2: Install Mysql and Setup for OFBiz

MySQL is a powerful database management system used for organizing and retrieving data.

To install MySQL, open terminal and type in these commands:
 sudo apt-get install mysql-server libapache2-mod-auth-mysql php5-mysql  

During the installation, MySQL will ask you to set a root password. If you miss the chance to set the password while the program is installing, it is very easy to set the password later from within the MySQL shell.

Once you have installed MySQL, we should activate it with this command:
 sudo mysql_install_db  

Creating Databases and user access for OFBiz

We will be creating databases and user access which will be used with ofbiz to store the data. Below are the commands:
 mysql>create database ofbiz;  
 mysql>create database ofbizolap;  
 mysql>create database ofbiztenant;  
 mysql>use mysql;  
 mysql>select database();  
 mysql>create user ofbiz@localhost;  
 mysql>create user ofbizolap@localhost;  
 mysql>create user ofbiztenant@localhost;  
 mysql>update user set password=PASSWORD("ofbiz") where User='ofbiz';  
 mysql>update user set password=PASSWORD("ofbizolap") where User='ofbizolap';  
 mysql>update user set password=PASSWORD("ofbiztenant") where User='ofbiztenant';  
 mysql>grant all privileges on *.* to 'ofbiz'@localhost identified by 'ofbiz';  
 mysql>grant all privileges on *.* to 'ofbizolap'@localhost identified by 'ofbizolap';  
 mysql>grant all privileges on *.* to 'ofbiztenant'@localhost identified by 'ofbiztenant';  
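Because the passwords above are set by updating the user table directly, it is worth flushing privileges and confirming that one of the new accounts can actually log in. A quick check (a sketch, using the ofbiz account created above):

 mysql>flush privileges;  
 mysql>exit  
 $ mysql -u ofbiz -pofbiz ofbiz -e "select database();"  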

Step 3: Download Apache OFBiz from SVN


We will use SVN to download the latest Apache OFBiz files. First make sure the svn client is installed on the system, then check out the latest build from the OFBiz Subversion repository:
 # cd /opt/  
 # apt-get install subversion  
 # svn co http://svn.apache.org/repos/asf/ofbiz/trunk apache-ofbiz  

Step 4: Integrating Apache OFBiz with Mysql

 Before we build Apache OFBiz, we will integrate it with MySQL:
 1. First, we need the MySQL JDBC connector, mysql-connector-java-5.1.14-bin.jar, placed in <ofbiz-dir>/framework/entity/lib/jdbc (a copy sketch follows after these configuration steps).
 2. Create a backup of apache-ofbiz/framework/entity/config/entityengine.xml.
 3. Edit entityengine.xml as follows:
 a. Add the following datasources below the existing 'localmysql' datasource:

 <datasource name="localmysqlolap"  
     helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"  
     field-type-name="mysql"  
     check-on-start="true"  
     add-missing-on-start="true"  
     check-pks-on-start="false"  
     use-foreign-keys="true"  
     join-style="ansi-no-parenthesis"  
     alias-view-columns="false"  
     drop-fk-use-foreign-key-keyword="true"  
     table-type="InnoDB"  
     character-set="latin1"  
     collate="latin1_general_cs">  
   <read-data reader-name="seed"/>  
   <read-data reader-name="seed-initial"/>  
   <read-data reader-name="demo"/>  
   <read-data reader-name="ext"/>  
   <inline-jdbc  
       jdbc-driver="com.mysql.jdbc.Driver"  
        jdbc-uri="jdbc:mysql://127.0.0.1/ofbizolap?autoReconnect=true"  
       jdbc-username="ofbizolap"  
       jdbc-password="ofbizolap"  
       isolation-level="ReadCommitted"  
       pool-minsize="2"  
       pool-maxsize="250"  
       time-between-eviction-runs-millis="600000"/>  
 </datasource>  
 <datasource name="localmysqltenant"  
     helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"  
     field-type-name="mysql"  
     check-on-start="true"  
     add-missing-on-start="true"  
     check-pks-on-start="false"  
     use-foreign-keys="true"  
     join-style="ansi-no-parenthesis"  
     alias-view-columns="false"  
     drop-fk-use-foreign-key-keyword="true"  
     table-type="InnoDB"  
     character-set="latin1"  
     collate="latin1_general_cs">  
   <read-data reader-name="seed"/>  
   <read-data reader-name="seed-initial"/>  
   <read-data reader-name="demo"/>  
   <read-data reader-name="ext"/>  
   <inline-jdbc  
       jdbc-driver="com.mysql.jdbc.Driver"  
        jdbc-uri="jdbc:mysql://127.0.0.1/ofbiztenant?autoReconnect=true"  
       jdbc-username="ofbiztenant"  
       jdbc-password="ofbiztenant"  
       isolation-level="ReadCommitted"  
       pool-minsize="2"  
       pool-maxsize="250"  
       time-between-eviction-runs-millis="600000"/>  
 </datasource>  

b. Replace derby with mysql in default, default-no-eca and test delegators as follows:

 <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="false">  
   <group-map group-name="org.ofbiz" datasource-name="localmysql"/>  
   <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>  
   <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>  
 </delegator>  
 <delegator name="default-no-eca" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" entity-eca-enabled="false" distributed-cache-clear-enabled="false">  
   <group-map group-name="org.ofbiz" datasource-name="localmysql"/>  
   <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>  
   <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>  
 </delegator>  
 <delegator name="test" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main">  
   <group-map group-name="org.ofbiz" datasource-name="localmysql"/>  
   <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>  
   <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>  
 </delegator>   

Save the file.
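As noted in step 1 above, the MySQL JDBC connector jar also has to be in place before building. A minimal sketch, assuming the jar has already been downloaded (for example from dev.mysql.com) into the current directory and OFBiz was checked out to /opt/apache-ofbiz:

 # cp mysql-connector-java-5.1.14-bin.jar /opt/apache-ofbiz/framework/entity/lib/jdbc/  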

Step 5: Installing Apache OFBiz

 # cd /opt/apache-ofbiz/  
 # ./ant  

Step 6: Install Dataset, Load Demo and Seed Data

Apache OFBiz ships with demo and seed datasets. This data is useful for experimenting, but it should not be loaded in a production setup.

 # ./ant load-demo  
 # ./ant load-extseed  

Step 7: Start Apache OFBiz Service

After installing Apache OFBiz, use the following command to start the Apache OFBiz service on the system.

 # ./ant start  

Access Apache OFBiz in Browser


Access Apache OFBiz in a browser on port 8443, using a URL like the one below (substitute your server's FQDN) and the following login credentials.

 URL: https://svr10.tecadmin.net:8443/myportal/control/main  
 Admin Username: admin  
 Admin Password: ofbiz  

References:

http://ofbiz.apache.org/


Congratulations! You have successfully installed Apache OFBiz with MySQL integration on your Linux system.

Thursday, 6 August 2015

How to Install JAVA 8 (JDK 8u45) on Ubuntu & LinuxMint Via PPA


This blog post will help you install Oracle JAVA 8 (JDK/JRE 8u45) on Ubuntu 14.04 LTS, 12.04 LTS, and 10.04, as well as LinuxMint systems, using a PPA.

Installing Java 8 on Ubuntu

Add the webupd8team Java PPA repository to your system and install Oracle Java 8 using the following set of commands.
 $ sudo add-apt-repository ppa:webupd8team/java  
 $ sudo apt-get update  
 $ sudo apt-get install oracle-java8-installer  


Verify Installed Java Version

After successfully installing Oracle Java using the steps above, verify the installed version using the following command.
 manish@tech:~$ java -version  
 java version "1.8.0_45"  
 Java(TM) SE Runtime Environment (build 1.8.0_45-b14)  
 Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)  


Configuring Java Environment

The Webupd8 PPA repository also provides a package that sets the Java environment variables. Install it using the following command.
 $ sudo apt-get install oracle-java8-set-default  
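To confirm the environment variables took effect, open a new shell (or re-source your profile) and check JAVA_HOME; with the webupd8 packages it typically points to /usr/lib/jvm/java-8-oracle:

 $ echo $JAVA_HOME  
 /usr/lib/jvm/java-8-oracle  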

Reference:-

https://launchpad.net/~webupd8team/+archive/ubuntu/java



Monday, 25 May 2015

ELK Stack: Collection and Analysis of Centralized Logs


As a DevOps engineer, it is often difficult to identify, collect, and process logs, which could be system logs, web server logs, and so on.
One solution to this problem is the ELK stack.

The ELK stack is mainly composed of 3 components:
  • Elasticsearch: stores the logs collected from different servers
  • Logstash: collects all types of logs at a centralized location
  • Kibana: runs searches and generates visual dashboards
How ELK works ?

Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools rely on Elasticsearch; together they are called the ELK Stack.
Elasticsearch acts as the backend datastore, Logstash as the pipeline for collecting and processing your logs, and Kibana as the frontend for querying and reporting on them.

An ELK Stack setup has four main components:
  • Logstash: server component that collects all the incoming logs
  • Elasticsearch: stores all the logs
  • Kibana: web interface for searching and analysing logs
  • Logstash Forwarder: installed on each server whose logs need to be collected; it acts as an agent and uses the lumberjack network protocol to communicate with Logstash
The first requirement for setting up the ELK stack is to have Java installed on the system.
It is recommended to run a recent version of Java to ensure the greatest success in running Logstash.

It’s fine to run an open-source version such as OpenJDK: http://openjdk.java.net/
Or you can use the official Oracle version: http://www.oracle.com/technetwork/java/index.html
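For example, on Ubuntu the OpenJDK runtime can be installed straight from the standard repositories (a minimal sketch; the Oracle Java 8 PPA steps covered earlier on this blog work just as well):

 sudo apt-get install openjdk-7-jre-headless  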

Install Elasticsearch 

Run the command below to add the Elasticsearch GPG key to apt:
 sudo wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -  

Create the Elasticsearch source list
 sudo echo 'deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list  

Update your apt package database:
 sudo apt-get update  

Now that the source is in place, install Elasticsearch with the command below:
 sudo apt-get -y install elasticsearch=1.1.1  

Once it is installed, add the line below to /etc/elasticsearch/elasticsearch.yml to disable dynamic scripts:
script.disable_dynamic: true

To restrict outside access to the Elasticsearch instance (port 9200), specify the line below in the configuration file:
network.host: localhost

If you want to access your Elasticsearch instance from clients on a different IP address via JavaScript, add the following to elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"
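If you prefer to append these settings from the shell rather than opening a text editor, one way to do it (a sketch, assuming none of these keys are already present in elasticsearch.yml):

 echo "script.disable_dynamic: true" | sudo tee -a /etc/elasticsearch/elasticsearch.yml  
 echo "network.host: localhost" | sudo tee -a /etc/elasticsearch/elasticsearch.yml  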

Once you are done with changes, save the file and restart the service
 sudo service elasticsearch restart  

To start Elasticsearch automatically on boot:
 sudo update-rc.d elasticsearch defaults 95 10  

Your Elasticsearch instance is now ready to store logs from multiple servers.

Testing Elasticsearch:
Use the commands below to verify that Elasticsearch is working:
 ps aux | grep elasticsearch  
 curl -X GET 'http://localhost:9200'  
 curl 'http://localhost:9200/_search?pretty'  


Elasticsearch Kopf Plugin

The kopf plugin provides an admin GUI for Elasticsearch. It helps in debugging and managing clusters and shards. It’s really easy to install:
 sudo /usr/share/elasticsearch/bin/plugin -install lmenezes/elasticsearch-kopf  

Below is the screenshot for your understanding:



Install Kibana

Use the command below to download the Kibana build:
 sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz  

Extract the build,
 sudo tar xvf kibana-3.0.1.tar.gz  

Now we need to edit the Kibana configuration to change the Elasticsearch port from 9200 to 80, because we will later set up a proxy server (nginx, squid, etc.) to redirect requests from port 80 to 9200. Edit the following line in ~/kibana-3.0.1/config.js:

elasticsearch: "http://"+window.location.hostname+":80",
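One non-interactive way to make this edit (a sketch, assuming the stock config.js shipped with Kibana 3.0.1, where the port is 9200):

 sudo sed -i 's/+":9200"/+":80"/' ~/kibana-3.0.1/config.js  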

This is necessary because Kibana will be served on port 80. Since we are going to use nginx, we also need a dedicated location for the Kibana files, say:
/opt/kibana3/

Copy all the contents to this location:

 sudo mkdir -p /opt/kibana3 && sudo cp -R ~/kibana-3.0.1/* /opt/kibana3/  

Now that Kibana is set up, we need to install nginx, which is pretty easy.
Install Nginx

Use below command to install nginx
 sudo apt-get install nginx  

Next, download the Nginx configuration from kibana's github repositories to your home dir.
 sudo wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf  

In the above configuration, specify your FQDN (Fully Qualified Domain Name) and Kibana's root directory (the location we copied Kibana to):
server_name FQDN;
root /opt/kibana3;

Once you are done with changes move the file to nginx configuration location
 sudo cp nginx.conf /etc/nginx/sites-available/default  

To improve security, you can install apache2-utils and use htpasswd to generate a username and password for accessing nginx.
 sudo apt-get install apache2-utils  
 sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd user  

Now, you can restart the nginx service for the changes to take effect.
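Assuming the stock init script, that is simply:

 sudo service nginx restart  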

Kibana is now accessible via your FQDN or the public IP address of your Logstash Server i.e. http://logstash_server_public_ip/. If you go there in a web browser, you should see a Kibana welcome page which will allow you to view dashboards but there will be no logs to view because Logstash has not been set up yet. Let's do that now.


Install Logstash

The Logstash package is available from the same repository as Elasticsearch. Since the public key is already installed, let's create the source list:
 sudo echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list  

Update your apt package database
 sudo apt-get update  

Install the logstash using below command:
 sudo apt-get install logstash=1.4.2-1-2c0f5a1  

Generate SSL certificates:

Since we are going to use Logstash Forwarder to ship logs from our Servers to our Logstash Server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of Logstash Server. Create the directories that will store the certificate and private key with the following commands:
 sudo mkdir -p /etc/pki/tls/certs  
 sudo mkdir /etc/pki/tls/private  

If you have a DNS setup with your private networking, you should create an A record that contains the Logstash Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Use the below command to generate it:

 cd /etc/pki/tls; sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt  

Now that the SSL certificate is generated, we need to transfer it to each server that will ship logs to Logstash.

Configure Logstash

Now let's configure Logstash so that Kibana can query the logs it stores.

Logstash configuration files are in the JSON-format, and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's create a configuration file called 01-lumberjack-input.conf at /etc/logstash/conf.d/01-lumberjack-input.conf and set up our "lumberjack" input (the protocol that Logstash Forwarder uses). It should contain:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Now let's create a configuration file called 10-syslog.conf at /etc/logstash/conf.d/10-syslog.conf, where we will add a filter for syslog messages:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

This filter looks for logs that are labeled as "syslog" type (by a Logstash Forwarder) and uses "grok" to parse incoming syslog logs so they are structured and queryable. Finally, create an output configuration file at /etc/logstash/conf.d/30-lumberjack-output.conf:

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
This output basically configures Logstash to store the logs in Elasticsearch.
With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).

Now restart Logstash for the changes to take effect:
 sudo service logstash restart  
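Before relying on the new pipeline, you can also ask Logstash to validate the configuration files (a sketch, assuming the Debian package's default install path of /opt/logstash):

 sudo /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/  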

Setup Logstash Forwarder Package:

Copy the SSL certificate we generated earlier to the server from which the logs will be shipped:
 sudo scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp  

Install Logstash-forwarder Package:

On the server whose logs you want to ship, create the Logstash Forwarder source list:
 sudo echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list  

It also uses the same GPG key as Elasticsearch, which can be installed with this command:
 sudo wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -  

Then install the Logstash Forwarder package:
 sudo apt-get install logstash-forwarder  


Next, you will want to install the Logstash Forwarder init script, so it starts on bootup:
 cd /etc/init.d/; sudo wget https://raw.githubusercontent.com/elasticsearch/logstash-forwarder/a73e1cb7e43c6de97050912b5bb35910c0f8d0da/logstash-forwarder.init -O logstash-forwarder  
 sudo chmod +x logstash-forwarder  
 sudo update-rc.d logstash-forwarder defaults  

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):
 sudo mkdir -p /etc/pki/tls/certs  
 sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/  


Configure Logstash Forwarder:

In the Logstash Forwarder configuration we need to specify the details of the Logstash server, so that the forwarder can ship logs to it.

Now open /etc/logstash-forwarder and add the following lines, substituting your Logstash Server's private IP address for logstash_server_private_IP:

{
  "network": {
    "servers": [ "logstash_server_private_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}

Now restart the Logstash Forwarder using the command below:

 sudo service logstash-forwarder restart  

The ELK stack is now set up; check the Kibana dashboard to explore the collected logs.
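To confirm that events are actually arriving, you can also query Elasticsearch directly on the Logstash server (a quick sketch; the exact output depends on your data):

 curl 'http://localhost:9200/_search?q=type:syslog&pretty'  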

Below are some useful screenshots showing the data collected from the servers.


Conclusion:-


The ELK stack is useful in many ways: in one place you will find almost all of your logs, whether application logs, web server logs, system logs, etc. Using Kibana, you get visual reports and analysis of those logs, which makes it easy to extract the information you need.
The ELK stack is also easy to maintain on large-scale systems and infrastructure.

Sunday, 24 May 2015

Configure S3 bucket to enable CORS configuration

To configure S3 to allow cross-origin resource sharing, we need to create a CORS configuration: an XML document with rules that identify the origins allowed to access your bucket and the operations (HTTP methods) supported for each origin.

In the CORS configuration, you can specify the following values for the AllowedMethod element.
  • GET
  • PUT
  • POST
  • DELETE
  • HEAD
While defining the rules, we must understand the following:
  • The first rule specifies the allowed methods (GET, PUT, DELETE, etc.) for the first origin. It also allows all headers in a preflight OPTIONS request through the Access-Control-Request-Headers header.
  • The same kind of rule is applied to the second origin.
  • The third rule allows cross-origin GET requests from all origins. The '*' wildcard character refers to all origins.
Configuring S3 bucket:

Within the properties of the bucket we have to choose the option “Edit CORS configuration”:

In the window that appears, enter a configuration like this:
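A configuration along these lines (allowing GET, PUT, POST and DELETE from any subdomain of example.com, as described below; substitute your own domain) looks roughly like this:

 <CORSConfiguration>  
   <CORSRule>  
     <AllowedOrigin>http://*.example.com</AllowedOrigin>  
     <AllowedMethod>GET</AllowedMethod>  
     <AllowedMethod>PUT</AllowedMethod>  
     <AllowedMethod>POST</AllowedMethod>  
     <AllowedMethod>DELETE</AllowedMethod>  
     <AllowedHeader>*</AllowedHeader>  
   </CORSRule>  
 </CORSConfiguration>  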

This will accept any GET, PUT, POST or DELETE request for any element in the bucket, as long as the request originates from a page served from a subdomain of example.com.


Technically, when such a request includes the Origin header, the bucket will respond with an Access-Control-Allow-Origin header for the allowed origin (*.example.com).

This is how you configure S3 to enable CORS.

Sunday, 10 May 2015

CORS:- Cross Origin Resource Sharing

Earlier, developers had difficulty making a request to a different domain from JavaScript. Many set up proxies on their websites as a way to get around the restriction, which was the onset of a new host of open-redirect problems; developers also worked around the limitation with server-side proxies and a number of other techniques.

Almost all browsers now support CORS (including IE 8+, Firefox 3.5+, and Chrome).

Cross-Origin Resource Sharing (CORS)


* It is a W3C Working Draft that defines how the browser and server must communicate when accessing resources across origins. It is a specification recommended by the Web Applications Working Group within the W3C. It provides a way for a script running in a client's browser to use the XMLHttpRequest API and make direct HTTP requests to resources on domains other than the one the script was first loaded from (say, the Zuora REST API).

* CORS provides the following features:
Guarantees the data integrity of the API request: nothing is changed on its way from your server to the customer.
Provides authentication: ensures that the person who generated the request is who they say they are.
The basic idea behind CORS is to use custom HTTP headers to allow both the browser and the server to know enough about each other to determine whether the request or response should succeed or fail.

For a simple request, one that uses either GET or POST with no custom headers and whose body is text/plain, the request is sent with an extra header called Origin. The Origin header contains the origin (protocol, domain name, and port) of the requesting page, so the server can decide whether to serve the request.

An example Origin header looks like:

 Origin: http://www.domainA.com  

If the server decides that the request should be allowed, it sends an Access-Control-Allow-Origin header echoing back the same origin that was sent, or "*" if it's a public resource. Say:

 Access-Control-Allow-Origin: http://www.domainA.com  
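To see these headers in action, you can make a quick check with curl (a sketch; api.example.com stands in for any CORS-enabled endpoint):

 $ curl -i -H "Origin: http://www.domainA.com" http://api.example.com/resource  

The -i flag prints the response headers, where Access-Control-Allow-Origin should appear if the server permits that origin.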

This is how resources can be shared across different origins.