Project 1 Approach
University: University of Maryland Global Campus
Here is the recommended approach for project 1.
1) First, build the skeleton for project 1 as shown in part 5 of the video series on lexical
analysis, using the makefile provided. Then run it on the test cases test1.txt through
test3.txt that are provided in Project 1 Test Data, and be sure that you understand how it
works. Examine the contents of lexemes.txt so that you see the lexeme-token pairs that it
contains.
2) A good starting point is item 1 in the requirements, which adds the additional reserved
words of the language. Each of these is a separate token and requires a separate translation
rule. Examine the existing translation rules for the reserved words as an example of how to
proceed. In addition, add the token name for each one to the enumerated type Tokens in
tokens.h; the order in which you add them is unimportant. Rebuild the program with the
makefile to ensure that it builds correctly.
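As a sketch of what this step involves, the new scanner rules might look like the fragment below. The exact token names and the rule style used by the actual skeleton are assumptions here, since that file is not shown; use the names and conventions already present in your project.

```lex
    /* Hypothetical translation rules for a few of the new reserved
       words: each word returns its own distinct token, whose name
       must also be added to the enumerated type Tokens in tokens.h. */
fold        { return FOLD; }
endfold     { return ENDFOLD; }
when        { return WHEN; }
elsif       { return ELSIF; }
switch      { return SWITCH; }
case        { return CASE; }
others      { return OTHERS; }
endswitch   { return ENDSWITCH; }
```

Each name returned by a rule must appear exactly once in the Tokens enumeration so that every reserved word maps to a unique token number.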
Use test4.txt to test this modification. Shown below is the output that should result when
using that test case as input:
$ ./compile < test4.txt
1 // Function with All Reserved Words
2
3 function main returns character;
4 number: real is when 2 < 3, 0 : 1;
5 values: list of integer is (4, 5, 6);
6 begin
7 if number < 6.3 then
8 fold left + (1, 2, 3) endfold;
9 elsif 6 < 7 then
10 fold right + values endfold;
11 else
12 switch a is
13 case 1 => number + 2;
14 case 2 => number * 3;
15 others => number;
16 endswitch;
17 endif;
18 end;
Compiled Successfully
You should receive no lexical errors. At this point, you should also examine lexemes.txt to
verify that each new reserved word has a unique token number.
3) A good next step is to add all of the operators specified by items 2-8 in the
requirements. Examine the translation rules for the existing operators as an example of how
to proceed. As before, you must also add the token name for each new operator to the
enumerated type Tokens in tokens.h.
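By analogy with the reserved-word rules, the operator rules might look like the fragment below. The operator set shown is inferred from the test program above, and the token names are assumptions; match whatever naming scheme the existing rules use.

```lex
    /* Hypothetical translation rules for a few operators: literal
       patterns are quoted, and each returns its own token named
       in the enumerated type Tokens in tokens.h. */
"<"     { return LESS; }
">"     { return GREATER; }
"=>"    { return ARROW; }
"+"     { return ADDOP; }
"*"     { return MULOP; }
```

Note that multi-character operators such as "=>" need their own rule; flex prefers the longest match, so a two-character operator will not be split into two one-character tokens.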
Use test5.txt to test this modification. Shown below is the output that should result when
using that test case as input: