French Gigaword 3rd edition LDC2011T10
David Graff and Ângelo Mendonça and Denise DiPersio

LDC2011T10_french_gigaword_3e.tar.zst (2.08 GB)
Type: Dataset
Tags: Dataset, nlp, natural language, corpus, annotation, data, text, Annotated, LDC, gigaword, french

Bibtex:
@article{LDC2011T10,
title= {French Gigaword 3rd edition LDC2011T10},
journal= {},
author= {David Graff and Ângelo Mendonça and Denise DiPersio},
year= {},
isbn= {1-58563-593-6},
islrn= {447-232-270-158-3},
url= {https://doi.org/10.35111/2fnv-vm59},
doi= {10.35111/2fnv-vm59},
ldc= {LDC2011T10},
abstract= {# French Gigaword Third Edition - Linguistic Data Consortium

### Introduction

French Gigaword Third Edition is a comprehensive archive of newswire text data that has been acquired over several years by the Linguistic Data Consortium (LDC) at the University of Pennsylvania. This third edition updates [French Gigaword Second Edition (LDC2009T28)](http://catalog.ldc.upenn.edu/LDC2009T28) and adds material collected from January 1, 2009 through December 31, 2010.

The two distinct international sources of French newswire in this edition, and the time spans of collection covered for each, are as follows:

-   Agence France-Presse (afp\_fre) May 1994 - Dec. 2010
-   Associated Press French Service (apw\_fre) Nov. 1994 - Dec. 2010

The seven-letter codes in parentheses include the three-character source name abbreviations and the three-character language code (fre) separated by an underscore (\_) character. The three-letter language code conforms to the [ISO 639-2/B](http://www.loc.gov/standards/iso639-2/) standard.

### Data

Each data file name consists of the 7-letter prefix plus another underscore character, followed by a 6-digit date (representing the year and month during which the file contents were generated by the respective news source), followed by a .gz file extension, indicating that the file contents have been compressed using the GNU gzip compression utility ([RFC 1952](http://datatracker.ietf.org/doc/rfc1952/)). So, each file contains all the usable data received by LDC for the given month from the given news source.
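
As a rough illustration (Python, with a hypothetical file name; consult the actual corpus listing for real names), the sketch below splits such a monthly file name into its source, language, and date components:

```python
import re

# Naming convention described above: <src>_<lang>_<YYYYMM>.gz
FILENAME_PATTERN = re.compile(
    r"^(?P<source>[a-z]{3})_(?P<lang>[a-z]{3})_(?P<year>\d{4})(?P<month>\d{2})\.gz$"
)

def parse_monthly_filename(name):
    """Return (source, language, year, month) for a Gigaword monthly file name."""
    m = FILENAME_PATTERN.match(name)
    if m is None:
        raise ValueError(f"not a Gigaword monthly file name: {name}")
    return m["source"], m["lang"], int(m["year"]), int(m["month"])

# Hypothetical example name built from the convention (AFP French, May 1994):
print(parse_monthly_filename("afp_fre_199405.gz"))   # ('afp', 'fre', 1994, 5)
```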

All text data are presented in SGML form, using a very simple, minimal markup structure. All text consists of printable ASCII, white space, and printable code points in the Latin-1 Supplement character table, as defined by the Unicode Standard (ISO 10646), which covers the accented characters used in French. The Supplement (accented) characters are presented in UTF-8 encoding.
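
A quick sanity check of that character-set constraint on a decoded string can be sketched as follows (an illustrative helper, not part of the corpus tooling):

```python
def within_ascii_and_latin1_supplement(text):
    """True if every character is plain ASCII or falls in the Latin-1
    Supplement block (U+0080-U+00FF), the range described above."""
    return all(ord(ch) <= 0xFF for ch in text)

assert within_ascii_and_latin1_supplement("Dépêche de l'Agence France-Presse")
assert not within_ascii_and_latin1_supplement("日本語")   # other scripts are out of range
```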

The file dtd/gigaword\_f.dtd in the dtd directory provides the formal Document Type Declaration for parsing the SGML content. The corpus has been fully validated by a standard SGML parser utility (nsgmls), using this DTD file.

The SGML structure for this release represents some notable differences relative to the markup strategy used in early (pre-Gigaword) LDC publications of newswire data; these are intended to facilitate bulk processing of the present corpus. The major differences are:

-   Early corpora usually organized the data as one file per day, or limited the average file size to one megabyte (MB).

Typical compressed file sizes in the current corpus range from about 0.1 MB to about 10 MB; this equates to a range of about 0.5 to 30 MB per file when the data are uncompressed. In general, these files are not intended for use with interactive text editors or word processing software (though many such programs are likely to work reasonably well with these files). Rather, it is expected that the files will be used as input to programs that are geared to dealing with data in such quantities, for filtering, conditioning, indexing, statistical summary, etc.

-   Early corpora tended to use different markup outlines (different tag sets) depending on the data source; each source's structural properties were generally preserved to the extent possible (even though many elements of the delivered structure may have been meaningless for research use).

The present corpus uses only the information structure that is common to all sources and serves a clear function: headline, dateline, and core news content (usually containing paragraphs). The dateline is a brief string typically found at the beginning of the first paragraph in each news story, giving the location the report is coming from, and sometimes the news service and/or date. Since this content is not part of the initial sentence, it is separated from the first paragraph (this was not done prior to the Gigaword corpora).

For all of the documents in this corpus, we have applied a rudimentary (and approximate) categorization of DOC units into four distinct types. The classification is indicated by the "type" attribute included in each opening DOC tag. The four types are described below (a brief filtering sketch follows the list):

-   **story** : This is by far the most frequent type, and it represents the most typical newswire item: a coherent report on a particular topic or event, consisting of paragraphs and full sentences.
-   **multi** : This type of DOC contains a series of unrelated blurbs, each of which briefly describes a particular topic or event; this label is typically applied to DOCs that contain summaries of the day's news, news briefs in some general area (like finance or sports), and so on.
-   **advis** : (short for advisory) These are DOCs which the news service addresses to news editors -- they are not intended for publication to the end users (the populations who read the news).
-   **other** : This represents DOCs that clearly do not fall into any of the above types -- in general, items of this type are intended for broad circulation (they are not advisories), they may be topically coherent (unlike multi-type DOCs), and they typically do not contain paragraphs or sentences (they are not really stories); these are things like lists of sports scores, stock prices, temperatures around the world, and so on.
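
For illustration only, the sketch below streams one monthly file and keeps just the "story" documents. The file path is hypothetical, and the regular expression assumes the opening DOC tag carries id and type attributes in that order, as in other Gigaword releases; because the data are SGML rather than strict XML, simple pattern matching (or a tolerant SGML parser) is the practical approach.

```python
import gzip
import re

# Hypothetical path built from the naming convention described earlier.
PATH = "data/afp_fre/afp_fre_201012.gz"

DOC_RE = re.compile(
    r'<DOC id="(?P<id>[^"]+)" type="(?P<type>[^"]+)"\s*>(?P<body>.*?)</DOC>',
    re.DOTALL,
)

def iter_docs(path):
    """Yield (doc_id, doc_type, body) for each DOC in one gzipped monthly file."""
    with gzip.open(path, "rt", encoding="utf-8") as handle:
        data = handle.read()          # files are at most a few tens of MB uncompressed
    for match in DOC_RE.finditer(data):
        yield match["id"], match["type"], match["body"]

# Keep only coherent news reports, skipping multi/advis/other documents.
stories = [(doc_id, body) for doc_id, doc_type, body in iter_docs(PATH)
           if doc_type == "story"]
print(len(stories), "story documents")
```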

The overall totals for each source are summarized below. Note that the Totl-MB numbers show the amount of data when the files are uncompressed (i.e. approximately 5.7 gigabytes in total); the Gzip-MB column shows totals for compressed file sizes, and the K-wrds numbers are simply the number of whitespace-separated tokens (of all types) after all SGML tags are eliminated.

| Source | #Files | Gzip-MB | Totl-MB | K-wrds | #DOCs |
| --- | --- | --- | --- | --- | --- |
| afp\_fre | 195 | 1503 | 4255 | 641381 | 2356888 |
| apw\_fre | 194 | 489 | 1446 | 221470 | 801075 |
| TOTAL | 389 | 1992 | 5701 | 862851 | 3157963 |
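
For reference, a count comparable to the K-wrds column can be reproduced roughly as below: strip SGML tags, then count whitespace-separated tokens (the file path is illustrative):

```python
import gzip
import re

TAG_RE = re.compile(r"<[^>]+>")   # remove SGML tags before counting

def count_tokens(path):
    """Whitespace-separated token count for one gzipped monthly file,
    after eliminating all SGML tags (the K-wrds definition above)."""
    with gzip.open(path, "rt", encoding="utf-8") as handle:
        text = TAG_RE.sub(" ", handle.read())
    return len(text.split())

# Illustrative path; summing over all monthly files of a source (and dividing
# by 1000) should approach the per-source K-wrds totals in the table.
print(count_tokens("data/apw_fre/apw_fre_200901.gz") / 1000.0, "K words")
```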

### Sample

Please view this [sample](https://catalog.ldc.upenn.edu/desc/addenda/LDC2011T10.jpg).

### Metadata
- _Item Name:_ French Gigaword Third Edition
- _Author(s):_ David Graff, Ângelo Mendonça, Denise DiPersio
- _LDC Catalog No.:_ LDC2011T10
- _ISBN:_ 1-58563-593-6
- _ISLRN:_ 447-232-270-158-3
- _DOI:_ [https://doi.org/10.35111/2fnv-vm59](https://doi.org/10.35111/2fnv-vm59)
- _Release Date:_ September 15, 2011
- _Member Year(s):_ 2011
- _DCMI Type(s):_ Text
- _Data Source(s):_ newswire
- _Application(s):_ natural language processing, language modeling, information retrieval
- _Language(s):_ French
- _Language ID(s):_ fra
- _License(s):_ [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)  
- _Online Documentation:_ [LDC2011T10 Documents](https://catalog.ldc.upenn.edu/docs/LDC2011T10/)
- _Licensing Instructions:_ [Subscription & Standard Members, and Non-Members](http://www.ldc.upenn.edu/language-resources/data/obtaining)
- _Citation:_ Graff, David, Ângelo Mendonça, and Denise DiPersio. French Gigaword Third Edition LDC2011T10. Web Download. Philadelphia: Linguistic Data Consortium, 2011.
},
keywords= {Dataset, nlp, natural language, corpus, annotation, data, text, Annotated, LDC, gigaword, french},
terms= {},
license= {},
superseded= {}
}
