Biber

NAME

Biber - main module for biber, a bibtex replacement for users of biblatex

SYNOPSIS

use Biber;
my $biber = Biber->new();
$biber->parse_ctrlfile("example.bcf");
$biber->prepare;

METHODS

new
Initialize the Biber object, optionally passing named options as arguments.

display_end
Output summary of warnings/errors/misc before exit

biber_tempdir
Returns a File::Temp directory object used for temporary files during processing

biber_tempdir_name
Returns the directory name of the File::Temp directory object

sections
my $sections = $biber->sections
Returns a Biber::Sections object describing the bibliography sections

add_sections
Adds a Biber::Sections object. Used externally, e.g. by the biber program

datalists
my $datalists = $biber->datalists
Returns a Biber::DataLists object describing the bibliography sorting lists

langtags
Returns a Biber::LangTags object containing a parser for BCP47 tags

set_output_obj
Sets the object used to output final results
Must be a subclass of Biber::Output::base
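As a sketch of how these output methods fit together (assuming Biber is installed and biblatex has generated an example.bcf), the standard .bbl writer can be wired in like this:

```perl
use Biber;
use Biber::Output::bbl;

# Sketch: run Biber with the standard .bbl output class, which is a
# subclass of Biber::Output::base. Assumes a valid example.bcf exists.
my $biber = Biber->new();
$biber->parse_ctrlfile('example.bcf');
$biber->set_output_obj(Biber::Output::bbl->new());
$biber->prepare;                    # process and sort all entries
$biber->get_output_obj->output;     # write the .bbl file
```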

get_preamble
Returns the current preamble as an array ref

get_output_obj
Returns the object used to output final results

set_current_section
Sets the number of the section currently being processed

get_current_section
Gets the number of the section currently being processed

tool_mode_setup
Fakes parts of the control file for tool mode

parse_ctrlfile
This method reads the control file
generated by biblatex to work out the various biblatex options.
See Constants.pm for defaults and example of the data structure being built here.

process_setup
Place for miscellaneous pre-processing needed later

process_setup_tool
Place for miscellaneous pre-processing needed later in tool mode

resolve_alias_refs
Resolve aliases in xref/crossref/xdata, which take keys as values, to their real keys.
We use set_datafield as we are overriding the alias in the datasource.

process_citekey_aliases
Remove citekey aliases from citekeys as they don't point to real
entries.

instantiate_dynamic
This instantiates any dynamic entries so that they are available
for processing later on. This has to be done before most all other
processing so that when we call $section−>bibentry($key), as we
do many times in the code, we don't die because there is a key but
no Entry object.

resolve_xdata
Resolve xdata

cite_setmembers
Promotes set members to cited status

preprocess_sets
$biber->preprocess_sets
This records the set information for use later

process_interentry
$biber->process_interentry
This does two things:
1. Ensures proper inheritance of data from cross-references.
2. Ensures that crossrefs/xrefs that are directly cited or cross-referenced
at least mincrossrefs/minxrefs times are included in the bibliography.

validate_datamodel
Validate bib data according to a datamodel
Note that we are validating the internal Biber::Entries
after they have been created from the datasources so this is
datasource neutral, as it should be. It is here to enforce
adherence to what biblatex expects.

process_namedis
Generate name strings and disambiguation schema. Has to be in the context
of a data list (reference context) because uniquenametemplate can be specified
per-list/context

postprocess_sets
Adds required per-entry options etc. to sets

process_entries_static
Processing of entries which is not list-specific and which can therefore
insert data directly into entries

process_entries_pre
Main processing operations, to generate metadata and entry information
This method is automatically called by prepare.
Runs prior to uniqueness processing

process_entries_post
More processing operations, to generate things which require uniqueness
information like namehash
Runs after uniqueness processing

process_entries_final
Final processing operations which depend on all previous processing

process_uniqueprimaryauthor
Track seen primary author base names for generation of uniqueprimaryauthor

process_workuniqueness
Track seen work combinations for generation of singletitle, uniquetitle, uniquebaretitle and
uniquework

process_extradate
Track labelname/date parts combination for generation of extradate

process_extraname
Track labelname only for generation of extraname

process_extratitle
Track labelname/labeltitle combination for generation of extratitle

process_extratitleyear
Track labeltitle/labelyear combination for generation of extratitleyear

process_sets
Postprocess set entries
Checks for common set errors and enforces "dataonly" options for set members.
It is not necessary to set skipbib or skipbiblist in the OPTIONS field for
set members as these are set automatically by biblatex due to the \inset command.

process_nocite
Generate nocite information

process_labelname
Generate labelname information.

process_labeldate
Generate labeldate information, including times

process_labeltitle
Generate labeltitle
Note that this is not conditionalised on the biblatex "labeltitle" option
as labeltitle should always be output, since all standard styles need it.
Only extratitle is conditionalised on the biblatex "labeltitle" option.

process_fullhash
Generate fullhash

process_namehash
Generate namehash

process_pername_hashes
Generate per_name_hashes

process_visible_names
Generate the visible name information.
This is used in various places and it is useful to have it generated in one place.

process_labelalpha
Generate the labelalpha and also the variant for sorting

process_extraalpha
Generate the extraalpha information

process_presort
Put presort fields for an entry into the main Biber bltx state
so that it is all available in the same place since this can be
set per-type and globally too.

process_lists
Process a bibliography list

check_list_filter
Run an entry through a list filter. Returns a boolean.

generate_sortdataschema
Generate sort data schema for Sort::Key from sort spec like this:
  spec => [
    [undef, { presort => {} }],
    [{ final => 1 }, { sortkey => {} }],
    [
      {'sort_direction' => 'descending'},
      { sortname => {} },
      { author => {} },
      { editor => {} },
      { translator => {} },
      { sorttitle => {} },
      { title => {} },
    ],
    [undef, { sortyear => {} }, { year => {} }],
    [undef, { sorttitle => {} }, { title => {} }],
    [undef, { volume => {} }, { "0000" => {} }],
  ],

generate_sortinfo
Generate information for sorting

uniqueness
Generate the uniqueness information needed when creating .bbl

create_uniquename_info
Gather the uniquename information as we look through the names
What is happening in here is the following: We are registering the
number of occurrences of each name, name+init and fullname within a
specific context. For example, the context is "global" with uniquename
< mininit and "name list" for uniquename=mininit or minfull. The keys
we store to count this are the most specific information for the
context, so, for uniquename < mininit, this is the full name and for
uniquename=mininit or minfull, this is the complete list of full names.
These keys map to hash values that are ignored; they serve only to
accumulate repeated occurrences within the context, and since the repetition
count itself doesn't matter, the values are a convenient sinkhole for it.
For example, if we find in the global context a base name "Smith" in two different entries
under the same form "Alan Smith", the data structure will look like:
{Smith}->{global}->{Alan Smith} = 2
We don't care about the value as this means that there are 2 "Alan Smith"s in the global
context which need disambiguating identically anyway. So, we just count the keys for the
base name "Smith" in the global context to see how ambiguous the base name itself is. This
would be "1" and so "Alan Smith" would get uniquename=false because it's unambiguous as just
"Smith".
The same goes for "minimal" list context disambiguation for uniquename=mininit or minfull.
For example, if we had the base name "Smith" to disambiguate in two entries with labelname
"John Smith and Alan Jones", the data structure would look like:
{Smith}->{Smith+Jones}->{John Smith+Alan Jones} = 2
Again, counting the keys of the context for the base name gives us "1" which means we
have uniquename=false for "John Smith" in both entries because it's the same list. This also
works for repeated names in the same list "John Smith and Bert Smith". Disambiguating
"Smith" in this:
{Smith}->{Smith+Smith}->{John Smith+Bert Smith} = 2
So both "John Smith" and "Bert Smith" in this entry get
uniquename=false (of course, as long as there are no other "X Smith and
Y Smith" entries where X != "John" or Y != "Bert").
The values from biblatex.sty:
false = 0
init = 1
true = 2
full = 2
allinit = 3
allfull = 4
mininit = 5
minfull = 6
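The counting scheme described above can be illustrated with a small, hypothetical sketch (plain Perl hashes, not Biber's actual internals): register each seen name form under its base name and context, then count the distinct forms per context to measure how ambiguous the base name is.

```perl
use strict;
use warnings;

# Hypothetical illustration of the data structure described above:
# {base name}->{context}->{full form} = occurrence count.
my %count;
$count{'Smith'}{'global'}{'Alan Smith'} += 2;   # "Alan Smith" seen in two entries
$count{'Jones'}{'global'}{'Alan Jones'} += 1;
$count{'Jones'}{'global'}{'Bert Jones'} += 1;

# One distinct form for base name "Smith" -> unambiguous, uniquename=false
my $smith_forms = scalar keys %{$count{'Smith'}{'global'}};   # 1
# Two distinct forms for "Jones" -> needs disambiguation
my $jones_forms = scalar keys %{$count{'Jones'}{'global'}};   # 2
print "Smith: $smith_forms, Jones: $jones_forms\n";
```

The occurrence counts (the hash values) are never read back; only the number of distinct keys per context matters.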

generate_uniquename
Generate the per-name uniquename values using the information
harvested by create_uniquename_info()

create_uniquelist_info
Gather the uniquelist information as we look through the names

generate_uniquelist
Generate the per-namelist uniquelist values using the information
harvested by create_uniquelist_info()

generate_contextdata
Generate information for data which may change per datalist

generate_singletitle
Generate the singletitle field, if requested. The information for generating
this is gathered in process_workuniqueness()

generate_uniquetitle
Generate the uniquetitle field, if requested. The information for generating
this is gathered in process_workuniqueness()

generate_uniquebaretitle
Generate the uniquebaretitle field, if requested. The information for generating
this is gathered in process_workuniqueness()

generate_uniquework
Generate the uniquework field, if requested. The information for generating
this is gathered in process_workuniqueness()

generate_uniquepa
Generate the uniqueprimaryauthor field, if requested. The information for generating
this is gathered in create_uniquename_info()

sort_list
Sort a list using information in entries according to a certain sorting template.
Use a flag to skip info messages on first pass

preprocess_options
Preprocessing for options. Used primarily to perform process-intensive
operations which can be done once instead of inside dense loops later.

prepare
Do the main work.
Process and sort all entries before writing the output.

prepare_tool
Do the main work for tool mode

fetch_data
Fetch data for citekeys and their dependents from section datasources
Expects to find datasource packages named:
Biber::Input::<type>::<datatype>
and one defined subroutine called:
Biber::Input::<type>::<datatype>::extract_entries
which takes args:
1: Biber object
2: Datasource name
3: Reference to an array of cite keys to look for
and returns an array of the cite keys it did not find in the datasource
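The driver contract above can be sketched as a minimal, hypothetical datasource package. The "file"/"myformat" names and the lookup table are invented for illustration; a real driver would also create Biber::Entry objects for the keys it finds.

```perl
use strict;
use warnings;

# Hypothetical skeleton of a datasource driver following the contract
# described above: Biber::Input::<type>::<datatype>::extract_entries.
package Biber::Input::file::myformat;

sub extract_entries {
  my ($biber, $source, $keys) = @_;
  my %in_source = ('knuth1984' => 1);              # pretend the source holds one key
  return grep { !exists $in_source{$_} } @$keys;   # keys NOT found in the datasource
}

package main;
my @missing = Biber::Input::file::myformat::extract_entries(
  undef, 'refs.myf', ['knuth1984', 'nosuchkey']);
print "not found: @missing\n";
```

Returning the unfound keys lets the caller try the next datasource in the section for the remaining keys.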

get_dependents
Get dependents of the entries for a given list of citekeys. Called recursively
until there are no more dependents to look for.

remove_undef_dependent
Remove undefined dependent keys from an entry using a map of
dependent keys to entries

_parse_sort
Convenience sub to parse a .bcf sorting section and return a
sorting object

_filedump and _stringdump
Dump the biber object with Data::Dump for debugging

AUTHORS

Philip Kime "<philip at kime.org.uk>"

BUGS

Please report any bugs or feature requests on our Github tracker at <https://github.com/plk/biber/issues>.

COPYRIGHT & LICENSE

Copyright 2009-2012 François Charette and Philip Kime, all rights reserved. Copyright 2012-2019 Philip Kime, all rights reserved.

This module is free software. You can redistribute it and/or modify it under the terms of the Artistic License 2.0.

This program is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.