
Tokenize punctuation #70

Merged 4 commits on Jan 20, 2018

Conversation

chbk (Contributor) commented Dec 7, 2017

  • Scoped () as punctuation.definition.section|parameters.bracket.round.begin|end.sql (a rough sketch of these patterns follows below)
  • Scoped , as punctuation.separator(.parameters).comma.sql
  • Scoped . as punctuation.separator.period.sql
  • Scoped ; as punctuation.terminator.statement.semicolon.sql
  • Rewrote the grammar for storage types, added nchar and bit varying, removed var char (with a space; I couldn't find any documentation for this type)
  • Added specs

Fixes #68
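
For context, a minimal sketch of what punctuation patterns like these might look like in a TextMate-style CSON grammar; the scope names come from the list above, but the regular expressions and surrounding structure are illustrative, not the exact rules in this PR:

'patterns': [
  {
    # illustrative: opening bracket of a section or parameter list
    'match': '\\('
    'name': 'punctuation.definition.section.bracket.round.begin.sql'
  }
  {
    # illustrative: closing bracket
    'match': '\\)'
    'name': 'punctuation.definition.section.bracket.round.end.sql'
  }
  {
    # comma between columns or parameters
    'match': ','
    'name': 'punctuation.separator.comma.sql'
  }
  {
    # period between qualified names, e.g. database.table
    'match': '\\.'
    'name': 'punctuation.separator.period.sql'
  }
  {
    # semicolon ending a statement
    'match': ';'
    'name': 'punctuation.terminator.statement.semicolon.sql'
  }
]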

chbk (Contributor, Author) commented Dec 8, 2017

Don't pull yet, noticed other unscoped brackets:

number(2)
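
A spec along these lines could cover that case; the token indices and the parameters variant of the bracket scope are assumptions based on the scopes listed above, so the real test may look different:

{tokens} = grammar.tokenizeLine('number(2)')
# assumes number is scoped as a storage type like the other column types
expect(tokens[0]).toEqual value: 'number', scopes: ['source.sql', 'storage.type.sql']
expect(tokens[1]).toEqual value: '(', scopes: ['source.sql', 'punctuation.definition.parameters.bracket.round.begin.sql']
expect(tokens[2]).toEqual value: '2', scopes: ['source.sql', 'constant.numeric.sql']
expect(tokens[3]).toEqual value: ')', scopes: ['source.sql', 'punctuation.definition.parameters.bracket.round.end.sql']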

chbk (Contributor, Author) commented Dec 11, 2017

@50Wliu This is now ready for review.

winstliu (Contributor) left a comment:

Looks good, just some spec comments.

@@ -167,3 +489,39 @@ describe "SQL grammar", ->
expect(tokens[3]).toEqual value: ' WITH ', scopes: ['source.sql', 'comment.block.sql']
expect(tokens[4]).toEqual value: '*/', scopes: ['source.sql', 'comment.block.sql', 'punctuation.definition.comment.sql']
expect(tokens[6]).toEqual value: 'AND', scopes: ['source.sql', 'keyword.other.DML.sql']

it 'tokenizes ()', ->
winstliu: tokenizes parentheses

expect(tokens[13]).toEqual value: ' employees', scopes: ['source.sql']
expect(tokens[14]).toEqual value: ')', scopes: ['source.sql', 'punctuation.definition.section.bracket.round.end.sql']

it 'tokenizes ,', ->
winstliu: commas

expect(tokens[1]).toEqual value: ',', scopes: ['source.sql', 'punctuation.separator.comma.sql']
expect(tokens[2]).toEqual value: ' year', scopes: ['source.sql']

it 'tokenizes .', ->
winstliu: periods

expect(tokens[1]).toEqual value: '.', scopes: ['source.sql', 'punctuation.separator.period.sql']
expect(tokens[2]).toEqual value: 'table', scopes: ['source.sql', 'constant.other.table-name.sql']

it 'tokenizes ;', ->
winstliu: semicolons

@@ -167,3 +489,39 @@ describe "SQL grammar", ->

it 'tokenizes ()', ->
winstliu: Maybe this could be under a describe 'punctuation', -> block as well.
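
For illustration, a sketch of that structure; the inputs and token indices here are placeholders, and the individual expectations would stay the same as in the diff:

describe 'punctuation', ->
  it 'tokenizes commas', ->
    {tokens} = grammar.tokenizeLine('name, year')
    # index 1 assumes the comma is the second token on this line
    expect(tokens[1]).toEqual value: ',', scopes: ['source.sql', 'punctuation.separator.comma.sql']

  it 'tokenizes periods', ->
    {tokens} = grammar.tokenizeLine('database.table')
    expect(tokens[1]).toEqual value: '.', scopes: ['source.sql', 'punctuation.separator.period.sql']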

{tokens} = grammar.tokenizeLine('timetz (2)')
expect(tokens[0]).toEqual value: 'timetz', scopes: ['source.sql', 'storage.type.sql']
expect(tokens[2]).toEqual value: '2', scopes: ['source.sql', 'constant.numeric.sql']
it 'tokenizes column types', ->
winstliu: This is a rather massive test. We don't have to test for everything; just smoke-test 1-2 from each category (and consider separating it into different tests).
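
One possible split, as a rough sketch; the categories and example types here (nchar, bigint, timetz) only illustrate the smoke-test idea and are not the exact cases in the PR:

describe 'column types', ->
  it 'tokenizes character types', ->
    {tokens} = grammar.tokenizeLine('nchar(10)')
    expect(tokens[0]).toEqual value: 'nchar', scopes: ['source.sql', 'storage.type.sql']

  it 'tokenizes numeric types', ->
    {tokens} = grammar.tokenizeLine('bigint')
    expect(tokens[0]).toEqual value: 'bigint', scopes: ['source.sql', 'storage.type.sql']

  it 'tokenizes date and time types', ->
    {tokens} = grammar.tokenizeLine('timetz (2)')
    expect(tokens[0]).toEqual value: 'timetz', scopes: ['source.sql', 'storage.type.sql']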
