
On processing extreme data

View/Open
D.Petcu_On_Processing_Extreme_Data_2015.pdf (294.2Kb)
Use this link to cite
http://hdl.handle.net/2183/20948
Collections
  • GI-GAC - Artigos [147]
Metadata
Title
On processing extreme data
Author(s)
Petcu, Dana
Iuhasz, Gabriel
Pop, Daniel
Talia, Domenico
Carretero, Jesús
Prodan, Radu
Fahringer, Thomas
Grasso, Ivan
Doallo, Ramón
Martín, María J.
Fraguela, Basilio B.
Trobec, Roman
Depolli, Matjaz
Almeida Rodriguez, Francisco
Sande, Francisco de
Da Costa, Georges
Pierson, Jean-Marc
Anastasiadis, Stergios
Bartzokas, Aristides
Lolis, Christos
Gonçalves, Pedro
Brito, Fabrice
Brown, Nick
Date
2016
Bibliographic citation
Petcu, D., Iuhasz, G., Pop, D., Talia, D., Carretero, J., Prodan, R., ... & Fraguela, B. B. (2016). On processing extreme data. Scalable Computing: Practice and Experience, 16(4), pp-467.
Abstract
[Abstract] Extreme Data is an incarnation of the Big Data concept, distinguished by the massive amounts of data that must be queried, communicated and analyzed in near real time using a very large number of memory or storage elements and exascale computing systems. Immediate examples are the scientific data produced at a rate of hundreds of gigabits per second that must be stored, filtered and analyzed; the millions of images per day that must be analyzed in parallel; and the billion social-data posts queried in real time against an in-memory database. Traditional disks and commercial storage cannot handle the extreme scale of such application data. Given the need to improve current concepts and technologies, this paper focuses on the needs of data-intensive applications running on systems composed of up to millions of computing elements (exascale systems). We propose a methodology to advance the state of the art. The starting point is the definition of new programming paradigms, APIs, runtime tools and methodologies for expressing data-intensive tasks on exascale systems. This will pave the way for exploiting massive parallelism over a simplified model of the system architecture, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real time.
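The paradigm the abstract alludes to, expressing a data-intensive task so that a runtime can spread independent pieces of work over many computing elements, can be illustrated with a minimal, purely hypothetical sketch. Nothing below is taken from the paper: the chunking scheme, the record filter and the use of Python's process pool are illustrative assumptions standing in for an exascale runtime.

```python
# Hypothetical sketch (not from the paper): a map/reduce-style expression of a
# data-intensive filtering task, where each chunk is processed independently
# and a final reduction combines the partial results.
from concurrent.futures import ProcessPoolExecutor


def filter_and_count(chunk):
    """Map step: keep only the records of interest in one chunk and count them."""
    return sum(1 for record in chunk if record % 7 == 0)


def main():
    # Synthetic stand-in for a stream of incoming records.
    data = list(range(1_000_000))
    chunk_size = 100_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # The process pool plays the role of the parallel runtime: chunks are
    # dispatched to workers, then the partial counts are reduced by summing.
    with ProcessPoolExecutor() as pool:
        partial_counts = pool.map(filter_and_count, chunks)
    print("matching records:", sum(partial_counts))


if __name__ == "__main__":
    main()
```

In a real extreme-data setting the chunks would come from a distributed store rather than a local list, and the runtime would place the map tasks near the data; the sketch only shows the shape of the programming model.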
Keywords
Extreme data
HPC
Exascale systems
Extreme computing
Parallel programming models
Scalable data analysis
Publisher's version
https://doi.org/10.12694/scpe.v16i4.1134
ISSN
1895-1767
