Contents
Set the Environment Variables
Start the Compiler from the Eclipse* CDT
Start the Compiler from the Command Line
Start the Intel® Debugger
Tutorials
Find Composer XE Documentation
Disclaimer and Legal Information

The Intel® C++ Composer XE 2013 for Linux* OS compiles C and C++ source files on Linux* operating systems. The compiler and debugger are supported on IA-32 and Intel® 64 architectures.
The Intel® C++ Composer XE 2013 for Linux* OS includes tutorials with step-by-step instructions and sample code that you can compile into an application using the Intel compiler. Try out the compiler by using the source code from a tutorial.
If you need help getting started with this product, go to the Software Developer Support site, where you can browse the knowledge base, ask questions of user community experts, and get additional help from Intel.
Before you can use the compiler, you must first set the environment variables by running the compiler environment script compilervars.sh or compilervars.csh with an argument that specifies the target architecture.
The following procedure uses the compilervars.sh script:
1. Open a terminal session.
2. Run the compiler environment script compilervars.sh:
   source <install-dir>/bin/compilervars.sh <arg>
   where <install-dir> is the directory containing the compiler /bin directory, and <arg> is one of the following architecture arguments:
   - ia32: Compilers and libraries for IA-32 architectures only
   - intel64: Compilers and libraries for Intel® 64 architectures only
Note The default path for <install-dir> is /opt/intel/.
Intel® C++ Compiler 13.1 for Linux* OS provides an integration, also known as an extension, to Eclipse* and the C/C++ Development Toolkit (CDT) that lets you develop, build, and run your Intel C/C++ projects in a visual, interactive environment. CDT is layered on Eclipse* and provides a C/C++ development environment perspective.
Note
Eclipse* and CDT are not bundled with the Intel® C++ Compiler 13.1. You must obtain them separately.
You must first install and configure Eclipse* on your system and then configure Eclipse* to use the Intel® C++ Compiler 13.1. To install Eclipse*, refer to the Eclipse* documentation.
To configure Eclipse* to use the Intel compiler, follow these steps:
1. Start Eclipse*.
2. Select Help > Install New Software.
3. Next to the Work with field, click the Add button. The Add Site dialog opens.
4. Click the Local button and browse to the <install-dir>/composer_xe_2013.0.xxx/eclipse_support/cdt8.0/eclipse directory.
   Note The default path for <install-dir> is /opt/intel/.
5. Click OK.
6. Deselect Group items by category.
7. Select the options beginning with Intel, and click Next.
8. Follow the installation instructions.
9. When asked if you want to restart Eclipse*, select Yes.
When Eclipse* restarts, you can create and work with CDT projects that use the Intel® C++ Compiler.
To invoke the compiler from Eclipse*:
1. Open your project.
2. Select the project in the Project Explorer.
3. Select Project > Build Project.
Before you can use the compiler, you must first set the environment variables as described above in Set the Environment Variables.
To invoke the Intel® C++ Compiler from the command line:
- For C source files, use a command similar to the following:
  icc my_source_file.c
- For C++ source files, use a command similar to the following:
  icpc my_source_file.cpp
Following successful compilation, the compiler creates an executable file in the current directory.
The Intel® Debugger is available in a graphical environment and as a command line tool. The graphical environment is a Java* application and requires the Java Runtime Environment (JRE).
Before you can use the graphical environment or command line debugger, you must first set the environment variables and then start the debugger.
1. Open a terminal session.
2. Set the environment variables as described above in Set the Environment Variables.
3. Enter one of the following commands:
   - idb to start the debugger in GUI mode
   - idbc to start the debugger in command line mode
The following tutorials include sample code that demonstrates the features of the compiler.
| Using the Intel® MIC Architecture | A system with the Intel® Many Integrated Core Architecture (Intel® MIC Architecture) can run your application on both the CPU and the coprocessor. The application starts on the CPU, with user-defined sections of the source code offloaded to the coprocessor. In this tutorial, you will compile the sample source code into an application that runs on both the CPU and the coprocessor. You will then examine the source code to see how you can define sections to run on both the host CPU and the coprocessor. Note You will need a system with the Intel® MIC Architecture to complete this tutorial. |
| Using Auto Vectorization | The auto-vectorizer detects operations in the application that can be done in parallel and converts sequential operations to parallel operations by using the Single Instruction Multiple Data (SIMD) instruction set. In this tutorial, you will be introduced to adding parallelism to your serial application by using the auto-vectorizer to improve the performance of the sample code. You will then compare the performance of the serial version and the version that was compiled with the auto-vectorizer. |
| Using Guided Auto Parallelism | Guided auto parallelism offers selective advice that you can apply to your application. In this tutorial, you will be introduced to guided auto parallelism by applying the advice specified in the guided auto parallelism report. You will then see the performance difference between the serial version and the version that uses the advice provided by the guided auto parallelism feature. |
| Threading Your Applications | Intel® C++ Composer XE 2013 has several software features that can improve the performance of your serial applications by using parallel processing. Open Multi-Processing (OpenMP*) is an API that supports multi-platform shared-memory parallel programming in all architectures. Intel® Threading Building Blocks (TBB) provides common parallel algorithm patterns in the form of function templates. Intel® Cilk™ Plus adds parallelism to new or existing programs. In this tutorial, you will be introduced to threading your application by compiling a version using OpenMP, Intel® TBB, and Intel® Cilk™ Plus. You will then see the performance difference between the serial version and versions using these features. |
| Using Intel® Math Kernel Library for Matrix Multiplication | Intel® Math Kernel Library (Intel MKL) implements many types of operations for performing math computations. In this tutorial, you will use Intel MKL to multiply matrices, measure the performance of matrix multiplication, and control threading. |
You can find documentation on the following:
| Intel® C++ Compiler XE 13.1 User and Reference Guides | This document shows you how to compile your application, how to optimize your application by using optimization tools and other libraries, and describes all of the compiler options. The Intel® C++ Compiler includes man page information. You can view the man page information by first setting the environment variables, as described in Set the Environment Variables, and then typing man icc. The compiler documentation also includes man pages detailing the code coverage tool (codecov). Read a summary of compiler options from the command line by invoking the compiler with the -help option. |
| Intel® Debugger Documentation | This document contains the user guide for the Intel® debugger that you can use to debug your code. |
| Intel® Integrated Performance Primitives Documentation | These documents contain the user guide for an extensive library of multicore-ready, highly optimized software functions that you can use for multimedia data processing and communications applications. |
| Intel® Math Kernel Library Documentation | These documents contain the user guide for a library with optimized and scalable math functions. You can use these functions to create applications with maximum performance and seamlessly provide forward scaling from current to future many-core platforms. |
| Intel® Threading Building Blocks Documentation | These documents contain the user and reference guides for a C++ template library that you can use to create reliable, portable and scalable parallel applications. |
| Release Notes | This document contains the most up-to-date information about the product. |
| Included Samples | This document contains a list of sample projects for use with the compiler. The samples illustrate compiler optimizations, features, tools, and programming concepts. |
| Additional Learning Resources | Internet site with additional resources to help you use this product. To search for additional learning resources on other Intel® software products, go to the Intel® Learning Lab. |
| Intel® Software Documentation Library | Internet site with documentation for other Intel software products. |
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web Site.
Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details.
Intel, Intel Atom, and Intel Core are trademarks of Intel Corporation in the U.S. and/or other countries.
* Other names and brands may be claimed as the property of others.
Java is a registered trademark of Oracle and/or its affiliates.
Copyright © 2013, Intel Corporation. All rights reserved.
Document number: 326976-001US