Register, Genre, and Style by Douglas Biber
This book describes the most important kinds of texts in English and introduces the methodological techniques used to analyse them. Three analytical approaches are introduced and compared, describing a wide range of texts from the perspectives of register, genre and style. The primary focus of the book is on the analysis of registers. Part 1 introduces an analytical framework for studying registers, genre conventions, and styles. Part 2 provides detailed descriptions of particular text varieties in English, including spoken interpersonal varieties (conversation, university office hours, service encounters), written varieties (newspapers, academic prose, fiction), and emerging electronic varieties (e-mail, internet forums, text messages). Finally, Part 3 introduces advanced analytical approaches using corpora, and discusses theoretical concerns, such as the place of register studies in linguistics, and practical applications of register analysis. Each chapter ends with three types of activities: reflection and review activities, analysis activities, and larger project ideas.
Melvil Decimal System (DDC): 306.44 — Social sciences; Culture and institutions; Specific aspects of culture; Language
One of the most common problems in the text-attribution studies I have been reviewing is that critics such as Furbank and Owens select isolated phrases that they find matching between texts and then conclude that this overlap indicates a shared author. Because bias enters at the phrase-selection stage, the outcome is confirmation bias: the critics confirm the attribution they set out to re-establish or add, rather than arriving at an objective conclusion. This book's explanation of "register features", "register markers", "genre markers", and related narrower concepts both accounts for this mistake and demonstrates why it is one. The authors explain that different registers differ in their register features, such as the quantity of passive-voice verbs they tend to use. They then describe "register markers": distinctive phrases or constructions confined to a particular register, as when baseball game broadcasters repeatedly use the phrase "sliding into second".

The problem is that Furbank, Owens, and other mis-attributors take the likelihood of such phrase repetitions and conclude that it is possible to find phrases an author tends to repeat and to treat them as a signature. Yet some of these phrases are common to the entire register or genre (i.e., most novels, or most mysteries), so stretching the theory from its specified application (phrases indicating a register) to a new one (phrases indicating an authorial signature) is likely to produce false conclusions, as works by different authors within the same register can be mis-identified as sharing an author. Douglas Biber and Susan Conrad title the next section "The Need for Quantitative Analysis" for a reason: pulling out phrases without measuring their significance with detached numeric objectivity leads to bias.
However, critics either ignore this mathematical demand or twist the basic math required into formulas that can be manipulated to favor the intended attribution. Running a mathematical analysis on texts can be simple if basic units are measured, or endlessly difficult if one attempts to compare long phrases, since language in practice crosses the genre, register, and other lines that theory can treat as ironclad. Biber and Conrad then detail "The Need for a Representative Sample": but while they refer to samples as small as "100-word text excerpts", reviewers tend to take this requirement to mean astronomically large samples of millions of words. The confusion is reinforced by the vagueness of these sections. For example, on the question of sample size raised above, "how many different texts are enough?", the authors report: "Unfortunately, there are no clear-cut answers to these questions. Some linguistic features are extremely common and pervasive in texts, like nouns, verbs, or pronouns. For those features, you can compute reliable counts from a few texts that are relatively short (i.e., even 100-word text excerpts). Other features, like relative clauses, are less common: to reliably capture the distribution of those features in a register you would need longer texts and a larger sample of texts…" (54-9). I read several essays and books on this subject while designing my own computational-linguistic analysis, and this is one of the clearer explanations in the field. After testing various features, I learned that the "common" features such as nouns are much more reliable in showing a linguistic signature, which is why they remain accurate even in small word samples. In contrast, the relative clause is not only extremely difficult to count and analyze even with the best computer programs, but also less revealing of authorial style.
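The quoted point about sample size can be illustrated with a small simulation (my own sketch, with invented rates, not Biber and Conrad's data): treat each word as independently having some chance of bearing a feature, draw many 100-word excerpts, and compare how stable the counts are for a common feature (nouns, roughly one word in five) versus a rare one (relative clauses, here assumed at one word in a hundred).

```python
import random

def sample_counts(rate: float, words: int, n_texts: int, rng: random.Random):
    """Count feature hits in n_texts excerpts of `words` words each,
    treating each word as an independent Bernoulli(rate) trial."""
    return [sum(rng.random() < rate for _ in range(words)) for _ in range(n_texts)]

def coeff_of_variation(counts):
    """Standard deviation divided by the mean: relative noisiness of counts."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return (var ** 0.5) / mean if mean else float("inf")

rng = random.Random(42)
common = sample_counts(rate=0.22, words=100, n_texts=50, rng=rng)  # ~nouns
rare = sample_counts(rate=0.01, words=100, n_texts=50, rng=rng)    # ~relative clauses

print(f"common feature: CV = {coeff_of_variation(common):.2f}")
print(f"rare feature:   CV = {coeff_of_variation(rare):.2f}")
```

The rare feature's relative spread comes out several times larger, which is the authors' point: 100-word excerpts suffice for nouns, while relative clauses demand longer texts and larger samples before their counts settle.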
Yet Furbank, Owens, and most other biased linguistic researchers focus on these hard-to-calculate complex features, such as relative clauses, and then use the lack of conclusive findings to argue that there is no such thing as an authorial signature.
While this summary may make the book appear needlessly complicated, it is actually one of the more practical books on registers and authorial style on the market. It offers methods and strategies for measuring linguistic patterns to those with the patience to apply them. Despite such linguistic explanations of how accurate authorial attributions might be made, these lessons have not yet been combined into the 31-test comparative attribution method I am proposing, because, it seems, there are too many ghostwriters and plagiarists in academia for academics to be willing to publicize a method that might prevent such misdeeds from going undetected. There is seemingly nothing wrong with the facts Biber and Conrad present; the problem lies in the refusal to apply their recommendations properly.