dmd.lexer
Implements the lexical analyzer, which converts source code into lexical tokens.
Specification: Lexical (https://dlang.org/spec/lex.html)
Authors: Walter Bright
License: Boost License 1.0
Source: lexer.d
Documentation: https://dlang.org/phobos/dmd_lexer.html
- struct CompileEnv;
- Values to use for various magic identifiers
- uint versionNumber;
- __VERSION__
- const(char)[] date;
- __DATE__
- const(char)[] time;
- __TIME__
- const(char)[] vendor;
- __VENDOR__
- const(char)[] timestamp;
- __TIMESTAMP__
- bool previewIn;
- in means [ref] scope const, accepts rvalues
- bool transitionIn;
- -transition=in is active, in parameters are listed
- bool ddocOutput;
- collect embedded documentation comments
- bool masm;
- use MASM inline asm syntax
- IdentifierCharLookup cCharLookupTable;
- C identifier table (set to the lexer by the C parser)
- IdentifierCharLookup dCharLookupTable;
- D identifier table
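- Example (illustrative sketch, not from the source): fill in the values the lexer substitutes for the magic identifiers. The import path and the helper name makeEnv are assumptions made for this example; the fields are the ones listed above.
      import dmd.lexer : CompileEnv;

      CompileEnv makeEnv() nothrow
      {
          CompileEnv env;
          env.versionNumber = 2111;                   // reported for __VERSION__ (hypothetical value)
          env.vendor = "Example D Compiler";          // reported for __VENDOR__
          env.date = "Jan  1 2025";                   // reported for __DATE__
          env.time = "00:00:00";                      // reported for __TIME__
          env.timestamp = "Wed Jan  1 00:00:00 2025"; // reported for __TIMESTAMP__
          return env;
      }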
 
- class Lexer;
- Examples:
  Unittest:
      fprintf(stderr, "Lexer.unittest %d\n", __LINE__);

      ErrorSink errorSink = new ErrorSinkStderr;

      void test(T)(string sequence, T expected, bool Ccompile = false)
      {
          auto p = cast(const(char)*)sequence.ptr;
          dchar c2;
          Lexer lexer = new Lexer(errorSink);
          assert(expected == lexer.escapeSequence(Loc.initial, p, Ccompile, c2));
          assert(p == sequence.ptr + sequence.length);
      }

      test(`'`, '\'');
      test(`"`, '"');
      test(`?`, '?');
      test(`\`, '\\');
      test(`0`, '\0');
      test(`a`, '\a');
      test(`b`, '\b');
      test(`f`, '\f');
      test(`n`, '\n');
      test(`r`, '\r');
      test(`t`, '\t');
      test(`v`, '\v');
      test(`x00`, 0x00);
      test(`xff`, 0xff);
      test(`xFF`, 0xff);
      test(`xa7`, 0xa7);
      test(`x3c`, 0x3c);
      test(`xe2`, 0xe2);
      test(`1`, '\1');
      test(`42`, '\42');
      test(`357`, '\357');
      test(`u1234`, '\u1234');
      test(`uf0e4`, '\uf0e4');
      test(`U0001f603`, '\U0001f603');
      test(`"`, '"');
      test(`<`, '<');
      test(`>`, '>');
- IdentifierCharLookup charLookup;
- Character table for identifiers
- bool Ccompile;
- true if compiling ImportC
- ubyte boolsize;
- size of a C _Bool, default 1
- ubyte shortsize;
- size of a C short, default 2
- ubyte intsize;
- size of a C int, default 4
- ubyte longsize;
- size of C long, 4 or 8
- ubyte long_longsize;
- size of a C long long, default 8
- ubyte long_doublesize;
- size of C long double, 8 or D real.sizeof
- ubyte wchar_tsize;
- size of C wchar_t, 2 or 4
- ErrorSink eSink;
- send error messages through this interface
- CompileEnv compileEnv;
- environment
- nothrow this(const(char)* filename, const(char)* base, size_t begoffset, size_t endoffset, bool doDocComment, bool commentToken, ErrorSink errorSink, const CompileEnv* compileEnv) scope;
- Creates a Lexer for the source code base[begoffset .. endoffset + 1]. The last character, base[endoffset], must be null (0) or EOF (0x1A).
  Parameters:
    const(char)* filename     used for error messages
    const(char)* base         source code, must be terminated by a null (0) or EOF (0x1A) character
    size_t begoffset          starting offset into base[]
    size_t endoffset          the last offset to read into base[]
    bool doDocComment         handle documentation comments
    bool commentToken         comments become TOK.comment's
    ErrorSink errorSink       where error messages go, must not be null
    CompileEnv* compileEnv    version, vendor, date, time, etc.
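- Example (illustrative sketch, not from the source): construct a Lexer over a small null-terminated buffer and walk its token stream. The module paths, ErrorSinkStderr, TOK.endOfFile, and the nextToken/token members come from the dmd front end and are assumptions here; only the constructor signature above is documented on this page.
      import dmd.lexer : Lexer, CompileEnv;
      import dmd.tokens : TOK;
      import dmd.errorsink : ErrorSink, ErrorSinkStderr;

      void lexExample()
      {
          // the buffer must end in 0 or 0x1A; endoffset indexes that terminator
          static immutable string src = "int x = 42;\0";
          ErrorSink errorSink = new ErrorSinkStderr;
          CompileEnv env;   // default magic-identifier values
          auto lexer = new Lexer("example.d", src.ptr, 0, src.length - 1,
                                 false /*doDocComment*/, false /*commentToken*/,
                                 errorSink, &env);
          // nextToken() advances and returns the kind of the token just scanned
          while (lexer.nextToken() != TOK.endOfFile)
          {
              // lexer.token holds the token that was just scanned
          }
      }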
- nothrow this(const(char)* filename, const(char)* base, size_t begoffset, size_t endoffset, bool doDocComment, bool commentToken, bool whitespaceToken, ErrorSink errorSink, const CompileEnv* compileEnv = null);
- Alternative entry point for DMDLIB, adds whitespaceToken
- nothrow @safe this(ErrorSink errorSink) scope;
- Used in unittests for a mock Lexer
- final nothrow void resetDefineLines(const(char)[] slice);
- Reset lexer to lex #define's
- final nothrow void nextDefineLine();
- Set up for next #define line. p should be at start of next line.
- final pure nothrow @nogc @property @safe bool empty() const;
- Range interface
- pure nothrow @safe Token* allocateToken();
- Returns: a newly allocated Token.
- final nothrow TOK peekNext();
- Look ahead at next token's value.
- final nothrow TOK peekNext2();
- Look 2 tokens ahead at value.
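- Example (illustrative sketch): one- and two-token lookahead without consuming input, assuming a lexer built as in the constructor example above; TOK.identifier, TOK.assign, and nextToken() are taken from the dmd front end as assumptions.
      // neither call advances the lexer's position
      TOK next1 = lexer.peekNext();
      TOK next2 = lexer.peekNext2();
      if (next1 == TOK.identifier && next2 == TOK.assign)
      {
          // the upcoming tokens look like the start of an assignment,
          // so a caller could choose a parse before committing with nextToken()
      }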
- final nothrow void scan(Token* t);
- Turn next token in buffer into a token.
  Parameters:
    Token* t    the token to set the resulting Token to
- final nothrow Token* peekPastParen(Token* tk);
- tk is on the opening (. Look ahead and return token that is past the closing ).
- final nothrow TOK hexStringConstant(Token* t);
- Lex hex strings: x"0A ae 34FE BD"
- nothrow bool parseSpecialTokenSequence();
- Parse special token sequence.
  Returns: true if the special token sequence was handled
- final nothrow void poundLine(ref Token tok, bool linemarker);
- Parse line/file preprocessor directive:
      #line linnum [filespec]
  Allow __LINE__ for linnum, and __FILE__ for filespec. Accept linemarker format:
      linnum [filespec] {flags}
  There can be zero or more flags, which are one of the digits 1..4, and must be in ascending order. The flags are ignored.
  Parameters:
    Token tok          token we're on, which is linnum of linemarker
    bool linemarker    true if line marker format and lexer is on linnum
  References:
    linemarker: https://gcc.gnu.org/onlinedocs/gcc-11.1.0/cpp/Preprocessor-Output.html
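- Example (illustrative): the two accepted textual forms, a D #line directive and a GCC-style linemarker as found in preprocessed C (file names are made up for this example).
      #line 42 "example.d"
      # 42 "example.c" 1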
- final nothrow void skipToNextLine(OutBuffer* defines = null);
- Scan forward to start of next line.
  Parameters:
    OutBuffer* defines    send characters to defines
- static pure nothrow const(char)* combineComments(const(char)[] c1, const(char)[] c2, bool newParagraph);
- Combine two document comments into one, separated by an extra newline if newParagraph is true.
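- Example (illustrative sketch): merge two already-extracted documentation comments. combineComments is static per the signature above, so no Lexer instance is needed; passing true separates the two parts with an extra newline.
      const(char)* merged = Lexer.combineComments("First paragraph.", "Second paragraph.", true);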
- nothrow void printRestOfTokens();
- Print the tokens from the current token to the end, while not advancing the parser forward. Useful for debugging.
 