Choosing the Right Path for Your TA-002-P Exam Preparation
Welcome to PassExamHub's comprehensive study guide for the HashiCorp Certified: Terraform Associate exam. Our TA-002-P dumps are designed to equip you with the knowledge and resources you need to confidently prepare for and succeed in the TA-002-P certification exam.
What Our HashiCorp TA-002-P Study Material Offers
PassExamHub's TA-002-P dumps PDF is carefully crafted to provide you with a comprehensive and effective learning experience. Our study material includes:
In-depth Content: Our study guide covers all the key concepts, topics, and skills you need to master for the TA-002-P exam. Each topic is explained in a clear and concise manner, making it easy to understand even the most complex concepts.
Online Test Engine: Test your knowledge and build your confidence with a wide range of practice questions that simulate the actual exam format. Our test engine covers every exam objective and provides detailed explanations for both correct and incorrect answers.
Exam Strategies: Get valuable insights into exam-taking strategies, time management, and how to approach different types of questions.
Real-world Scenarios: Gain practical insights into applying your knowledge in real-world scenarios, ensuring you're well-prepared to tackle challenges in your professional career.
Why Choose PassExamHub?
Expertise: Our TA-002-P exam questions and answers are developed by experienced HashiCorp certified professionals who have a deep understanding of the exam objectives and industry best practices.
Comprehensive Coverage: We leave no stone unturned in covering every topic and skill that could appear on the TA-002-P exam, ensuring you're fully prepared.
Engaging Learning: Our content is presented in a user-friendly and engaging format, making your study sessions enjoyable and effective.
Proven Success: Countless students have used our study materials to achieve their TA-002-P certifications and advance their careers.
Start Your Journey Today!
Embark on your journey to HashiCorp Certified: Terraform Associate success with PassExamHub. Our study material is your trusted companion in preparing for the TA-002-P exam and unlocking exciting career opportunities.
HashiCorp TA-002-P Sample Questions and Answers
Question # 1
In the following code snippet, the block type is identified by which string?
A. "aws_instance" B. resource C. "db" D. instance_type
Answer: B
Question # 2
Which statement best describes what the local variable assignment is doing in the following code snippet:
A. Create a distinct list of route table name objects B. Create a map of route table names to subnet names C. Create a map of route table names from a list of subnet names D. Create a list of route table names eliminating duplicates
Answer: D
Question # 3
While Terraform is generally written using the HashiCorp Configuration Language (HCL), in what other syntax can Terraform also be expressed?
A. JSON B. YAML C. TypeScript D. XML
Answer: A
Explanation:
The constructs in the Terraform language can also be expressed in JSON syntax, which is
harder for humans to read and edit but easier to generate and parse programmatically.
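For illustration, an HCL resource such as resource "aws_instance" "example" could be expressed in a main.tf.json file roughly like this (a sketch; the resource name and values are illustrative):

```json
{
  "resource": {
    "aws_instance": {
      "example": {
        "ami": "ami-123456",
        "instance_type": "t2.micro"
      }
    }
  }
}
```

Terraform treats .tf.json files as equivalent to .tf files, which is why the JSON variant is convenient for machine-generated configuration.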
Question # 4
Complete the following sentence: The terraform state command can be used to ____
A. modify state B. view state C. refresh state D. There is no such command
Answer: A
Question # 5
Which Terraform command will check and report errors within modules, attribute names, and value types to make sure they are syntactically valid and internally consistent?
A. terraform validate B. terraform format C. terraform fmt D. terraform show
Answer: A
Explanation:
The terraform validate command validates the configuration files in a directory, referring
only to the configuration and not accessing any remote services such as remote state,
provider APIs, etc.
Validate runs checks that verify whether a configuration is syntactically valid and internally
consistent, regardless of any provided variables or existing state. It is thus primarily useful
for general verification of reusable modules, including the correctness of attribute names
and value types.
It is safe to run this command automatically, for example as a post-save check in a text
editor or as a test step for a re-usable module in a CI system.
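As a sketch, a typical run looks like the following (exact output wording can vary by Terraform version):

```shell
# Run from the directory containing the configuration; no cloud
# credentials or remote state access is required.
$ terraform validate
Success! The configuration is valid.
```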
Question # 6
A user creates three workspaces from the command line - prod, dev, and test. Which of the following commands will the user run to switch to the dev workspace?
A. terraform workspace dev B. terraform workspace select dev C. terraform workspace -switch dev D. terraform workspace switch dev
Answer: B
Explanation:
The terraform workspace select command is used to choose a different workspace to use for further operations.
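A hedged transcript of the workflow (workspace names from the question; the asterisk marks the active workspace):

```shell
$ terraform workspace list
  default
* prod
  dev
  test
$ terraform workspace select dev
Switched to workspace "dev".
```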
Given the below resource configuration:
resource "aws_instance" "web" {
  # ...
  count = 4
}
What does the Terraform resource address aws_instance.web refer to?
A. It refers to all 4 web instances together; for further individual segregation, indexing is required, with a 0-based index. B. It refers to the last web EC2 instance, as by default, if no index is provided, the last / N-1 index is used. C. It refers to the first web EC2 instance out of the 4, as by default, if no index is provided, the first / 0th index is used. D. The above will result in a syntax error, as it is not syntactically correct. Resources defined using count can only be referenced using indexes.
Answer: A
Explanation:
A Resource Address is a string that references a specific resource in a larger infrastructure.
An address is made up of two parts:
[module path][resource spec]
Module path:
A module path addresses a module within the tree of modules. It takes the form:
module.A.module.B.module.C...
Multiple modules in a path indicate nesting. If a module path is specified without a resource
spec, the address applies to every resource within the module. If the module path is
omitted, this addresses the root module.
Given a Terraform config that includes:
resource "aws_instance" "web" {
# ...
count = 4
}
An address like this:
aws_instance.web[3]
Refers to only the last instance in the config, while an address like this:
aws_instance.web
Refers to all four "web" instances together.
A user has created a module called "my_test_module" and committed it to GitHub. Over time, several commits have been made with updates to the module, each tagged in GitHub with an incremental version number. Which of the following lines would be required in a module configuration block in Terraform to select tagged version v1.0.4?
A. source = "git::https://example.com/my_test_module.git@tag=v1.0.4" B. source = "git::https://example.com/my_test_module.git&ref=v1.0.4" C. source = "git::https://example.com/my_test_module.git#tag=v1.0.4" D. source = "git::https://example.com/my_test_module.git?ref=v1.0.4"
Answer: D
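The pattern can be sketched as a module block (the URL is the illustrative one from the options; "ref" also accepts branch names and commit SHAs):

```hcl
module "my_test_module" {
  # Selects the commit tagged v1.0.4 from the Git repository
  source = "git::https://example.com/my_test_module.git?ref=v1.0.4"
}
```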
What are some of the features of Terraform state? (select three)
A. inspection of cloud resources B. determining the correct order to destroy resources C. mapping configuration to real-world resources D. increased performance
Answer: B,C,D
Question # 14
By default, where does Terraform store its state file?
A. Amazon S3 bucket B. shared directory C. remotely using Terraform Cloud D. current working directory
Answer: D
Explanation: Explanation
By default, the state file is stored in a local file named "terraform.tfstate", but it can also be
stored remotely, which works better in a team environment.
Question # 15
What is the result of the following terraform function call?
A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. Which of the following is not a supported backend type?
A. Terraform Enterprise B. Consul C. Github D. S3 E. Artifactory
Answer: C
When writing Terraform code, HashiCorp recommends that you use how many spaces between each nesting level?
A. 0 B. 1 C. 2 D. 4
Answer: C
Explanation: The Terraform parser allows you some flexibility in how you lay out the elements in your configuration files, but the Terraform language also has some idiomatic style conventions which we recommend users always follow for consistency between files and modules written by different teams. Automatic source code formatting tools may apply these conventions automatically.
Indent two spaces for each nesting level. When multiple arguments with single-line values appear on consecutive lines at the same nesting level, align their equals signs:
ami           = "abc123"
instance_type = "t2.micro"
When both arguments and blocks appear together inside a block body, place all of the arguments together at the top and then place nested blocks below them. Use one blank line to separate the arguments from the blocks. Use empty lines to separate logical groups of arguments within a block.
For blocks that contain both arguments and "meta-arguments" (as defined by the Terraform language semantics), list meta-arguments first and separate them from other arguments with one blank line. Place meta-argument blocks last and separate them from other blocks with one blank line.
resource "aws_instance" "example" {
  count = 2 # meta-argument first

  ami           = "abc123"
  instance_type = "t2.micro"

  network_interface {
    # ...
  }

  lifecycle { # meta-argument block last
    create_before_destroy = true
  }
}
Top-level blocks should always be separated from one another by one blank line. Nested blocks should also be separated by blank lines, except when grouping together related blocks of the same type (like multiple provisioner blocks in a resource). Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family. (For example: root_block_device, ebs_block_device and ephemeral_block_device on aws_instance form a family of block types describing AWS block devices, and can therefore be grouped together and mixed.)
Question # 19
What is the best and easiest way for Terraform to read and write secrets from HashiCorpVault?
A. Vault provider B. API access using the AppRole auth method C. integration with a tool like Jenkins D. CLI access from the same machine running Terraform
Answer: A
Question # 20
Which one is the right way to import a local module named consul?
A. module "consul" { source = "consul" } B. module "consul" { source = "./consul" } C. module "consul" { source = "../consul" } D. module "consul" { source = "module/consul" }
Answer: B,C
Explanation:
A local path must begin with either ./ or ../ to indicate that a local path is intended, to
distinguish from a module registry address.
module "consul" {
source = "./consul"
}
Question # 21
A user runs terraform init on their RHEL-based server and, per the output, two provider plugins are downloaded:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.44.0...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
Terraform has been successfully initialized!
Where are these plugins downloaded to?
A. The .terraform.plugins directory in the directory terraform init was executed in. B. The .terraform/plugins directory in the directory terraform init was executed in. C. /etc/terraform/plugins D. The .terraform.d directory in the directory terraform init was executed in.
Answer: B
Question # 22
Terraform will sync all resources in state by default for every plan and apply, hence for larger infrastructures this can slow down the terraform plan and terraform apply commands. True or False?
A. False B. True
Answer: B
Explanation:
For small infrastructures, Terraform can query your providers and sync the latest attributes
from all your resources. This is the default behavior of Terraform: for every plan and apply,
Terraform will sync all resources in your state.
For larger infrastructures, querying every resource is too slow. Many cloud providers do not
provide APIs to query multiple resources at once, and the round trip time for each resource
is hundreds of milliseconds. On top of this, cloud providers almost always have API rate
limiting so Terraform can only request a certain number of resources in a period of time.
Larger users of Terraform make heavy use of the -refresh=false flag as well as the -target
flag in order to work around this. In these scenarios, the cached state is treated as the
record of truth.
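The workaround flags mentioned above can be sketched as follows (the resource address is illustrative):

```shell
# Skip the refresh step entirely during planning
terraform plan -refresh=false

# Limit the operation to one resource (and its dependencies)
terraform plan -target=aws_instance.web
```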
Question # 23
During a terraform plan, a resource is successfully created but eventually fails during provisioning. What happens to the resource?
A. Terraform attempts to provision the resource up to three times before exiting with an error B. the terraform plan is rolled back and all provisioned resources are removed C. it is automatically deleted D. the resource is marked as tainted
Answer: D
Explanation: Explanation
If a resource successfully creates but fails during provisioning, Terraform will error and
mark the resource as "tainted". A resource that is tainted has been physically created, but
can't be considered safe to use since provisioning failed. Terraform also does not
automatically roll back and destroy the resource during the apply when the failure happens,
because that would go against the execution plan: the execution plan would've said a
resource will be created, but does not say it will ever be deleted.
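A resource can also be tainted manually to force recreation on the next apply; a hedged sketch (the address is illustrative, and note that terraform taint was deprecated in later Terraform releases in favor of terraform apply -replace=...):

```shell
terraform taint aws_instance.example
# The next apply plans a destroy-and-recreate for the tainted resource
terraform apply
```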
Question # 24
You have created a custom variable definition file my_vars.tfvars. How will you use it for provisioning infrastructure?
A. terraform apply -var-state-file ="my_vars.tfvars" B. terraform apply var-file="my_vars.tfvars" C. terraform plan -var-file="my_vars.tfvar" D. terraform apply -var-file="my_vars.tfvars"
Answer: D
Explanation:
To set lots of variables, it is more convenient to specify their values in a variable definitions
file (with a filename ending in either .tfvars or .tfvars.json) and then specify that file on the
command line with -var-file.
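As a sketch, a my_vars.tfvars file simply assigns values to variables already declared in the configuration (the variable names here are illustrative):

```hcl
# my_vars.tfvars -- passed with: terraform apply -var-file="my_vars.tfvars"
region         = "us-east-1"
instance_count = 2
```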
Question # 25
Your organization has moved to AWS and has manually deployed infrastructure using the console. Recently, a decision has been made to standardize on Terraform for all deployments moving forward.
A. Submit a ticket to AWS and ask them to export the state of all existing resources and use terraform import to import them into the state file. B. Delete the existing resources and recreate them using a new Terraform configuration so Terraform can manage them moving forward. C. Resources that are manually deployed in the AWS console cannot be imported by Terraform. D. Using terraform import, import the existing infrastructure into your Terraform state.
Answer: D
Explanation: Terraform is able to import existing infrastructure. This allows us to take resources we've created by some other means (i.e. via the console) and bring them under Terraform management. This is a great way to slowly transition infrastructure to Terraform.
The terraform import command is used to import existing infrastructure. To import a resource, first write a resource block for it in our configuration, establishing the name by which it will be known to Terraform. Example:
resource "aws_instance" "import_example" {
  # ...instance configuration...
}
Now terraform import can be run to attach an existing instance to this resource configuration.
$ terraform import aws_instance.import_example i-03efafa258104165f
aws_instance.import_example: Importing from ID "i-03efafa258104165f"...
aws_instance.import_example: Import complete! Imported aws_instance (ID: i-03efafa258104165f)
aws_instance.import_example: Refreshing state... (ID: i-03efafa258104165f)
Import successful!
The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform. This command locates the AWS instance with ID i-03efafa258104165f (which has been created outside Terraform) and attaches its existing settings, as described by the EC2 API, to the name aws_instance.import_example in the Terraform state.
Question # 26
Which of the following is considered a Terraform plugin?
A. Terraform language B. Terraform tooling C. Terraform logic D. Terraform provider
Answer: D
Explanation:
Terraform is built on a plugin-based architecture. All providers and provisioners that are
used in Terraform configurations are plugins, even the core types such as AWS and
Heroku. Users of Terraform are able to write new plugins in order to support new functionality.
Question # 27
In the example below, where is the value of the DNS record's IP address originating from?
resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = [module.web_server.instance_ip_address]
}
A. The regular expression named module.web_server B. The output of a module named web_server C. By querying the AWS EC2 API to retrieve the IP address D. Value of the web_server parameter from the variables.tf file
Answer: B
Explanation:
In a parent module, outputs of child modules are available in expressions as
module.<MODULE NAME>.<OUTPUT NAME>.
For example, if a child module named web_server declared an output named
instance_ip_address, you could access that value as
module.web_server.instance_ip_address.
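A minimal sketch of that pattern, assuming a child module in a local directory ./web_server (the directory layout and resource names are illustrative):

```hcl
# ./web_server/outputs.tf (inside the child module)
output "instance_ip_address" {
  value = aws_instance.web.private_ip
}

# Root module: after this block, the value is available as
# module.web_server.instance_ip_address
module "web_server" {
  source = "./web_server"
}
```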
Question # 28
True or False? When using the Terraform provider for Vault, the tight integration between these HashiCorp tools provides the ability to mask secrets in the terraform plan and state files.
A. False B. True
Answer: A
Explanation: Explanation
Currently, Terraform has no mechanism to redact or protect secrets that are returned via
data sources, so secrets read via this provider will be persisted into the Terraform state,
into any plan files, and in some cases in the console output produced while planning and
applying. These artifacts must, therefore, all be protected accordingly.
Question # 29
In the example below, the depends_on argument creates what type of dependency?
A. implicit dependency B. internal dependency C. explicit dependency D. non-dependency resource
Answer: C
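A hedged sketch of an explicit dependency, since the snippet is not shown above (resource names and values are illustrative):

```hcl
resource "aws_s3_bucket" "assets" {
  bucket = "example-app-assets"
}

resource "aws_instance" "app" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  # Explicit dependency: this instance is created only after the bucket,
  # even though nothing in its arguments references the bucket.
  depends_on = [aws_s3_bucket.assets]
}
```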
Question # 30
When using constraint expressions to signify a version of a provider, which of the following are valid provider versions that satisfy the expression found in the following code snippet? (select two)
terraform {
  required_providers {
    aws = "~> 1.2.0"
  }
}
A. 1.3.1 B. 1.2.3 C. 1.2.9 D. 1.3.0
Answer: B,C
Explanation:
As your Terraform usage becomes more advanced, there are some cases where you may
need to modify the Terraform state. Rather than modify the state directly, the terraform
state commands can be used in many cases instead. This command is a nested
subcommand, meaning that it has further subcommands.
Question # 31
You have written a Terraform IaC script which was working till yesterday, but is giving some vague error from today, which you are unable to understand. You want more detailed logs that could potentially help you troubleshoot the issue and understand the root cause. What can you do to enable this setting? Please note, you are using Terraform OSS.
A. Terraform OSS can push all its logs to a syslog endpoint. As such, you have to set up the syslog sink, and set the TF_LOG_PATH env variable to the syslog endpoint, and all logs will automatically start streaming. B. Detailed logs are not available in Terraform OSS, except the crash message. You need to upgrade to Terraform Enterprise for this feature. C. Set TF_LOG_PATH to the log sink file location, and logging output will automatically be stored there. D. Set TF_LOG to the log level DEBUG, and then set TF_LOG_PATH to the log sink file location. Terraform debug logs will be dumped to the sink path, even in Terraform OSS.
Answer: D
Explanation:
Terraform has detailed logs which can be enabled by setting the TF_LOG environment
variable to any value. This will cause detailed logs to appear on stderr.
You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to
change the verbosity of the logs. TRACE is the most verbose and it is the default if
TF_LOG is set to something other than a log level name.
To persist logged output you can set TF_LOG_PATH in order to force the log to always be
appended to a specific file when logging is enabled. Note that even when TF_LOG_PATH
is set, TF_LOG must be set in order for any logging to be enabled.
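In bash, for example, the two variables described above could be set as:

```shell
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log
terraform plan   # debug output is appended to ./terraform-debug.log
```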
Question # 32
What Terraform command can be used to inspect the current state file?
A. terraform inspect B. terraform read C. terraform show D. terraform state
Answer: C
Question # 33
State is a requirement for Terraform to function
A. True B. False
Answer: A
Explanation: State is a necessary requirement for Terraform to function. It is often asked if it is possible for Terraform to work without state, or for Terraform to not use state and just inspect cloud resources on every run. The reasons below explain why Terraform state is required. And in the scenarios where Terraform may be able to get away without state, doing so would require shifting massive amounts of complexity from one place (state) to another place (the replacement concept).
1. Mapping to the Real World
Terraform requires some sort of database to map Terraform config to the real world. When you have a resource resource "aws_instance" "foo" in your configuration, Terraform uses this map to know that instance i-abcd1234 is represented by that resource. For some providers like AWS, Terraform could theoretically use something like AWS tags. Early prototypes of Terraform actually had no state files and used this method. However, we quickly ran into problems. The first major issue was a simple one: not all resources support tags, and not all cloud providers support tags. Therefore, for mapping configuration to resources in the real world, Terraform uses its own state structure.
2. Metadata
Alongside the mappings between resources and remote objects, Terraform must also track metadata such as resource dependencies. Terraform typically uses the configuration to determine dependency order. However, when you delete a resource from a Terraform configuration, Terraform must know how to delete that resource. Terraform can see that a mapping exists for a resource not in your configuration and plan to destroy. However, since the configuration no longer exists, the order cannot be determined from the configuration alone. To ensure correct operation, Terraform retains a copy of the most recent set of dependencies within the state. Now Terraform can still determine the correct order for destruction from the state when you delete one or more items from the configuration.
One way to avoid this would be for Terraform to know a required ordering between resource types. For example, Terraform could know that servers must be deleted before the subnets they are a part of. The complexity for this approach quickly explodes, however: in addition to Terraform having to understand the ordering semantics of every resource for every cloud, Terraform must also understand the ordering across providers. Terraform also stores other metadata for similar reasons, such as a pointer to the provider configuration that was most recently used with the resource in situations where multiple aliased providers are present.
3. Performance
In addition to basic mapping, Terraform stores a cache of the attribute values for all resources in the state. This is the most optional feature of Terraform state and is done only as a performance improvement. When running a terraform plan, Terraform must know the current state of resources in order to effectively determine the changes that it needs to make to reach your desired configuration. For small infrastructures, Terraform can query your providers and sync the latest attributes from all your resources. This is the default behavior of Terraform: for every plan and apply, Terraform will sync all resources in your state. For larger infrastructures, querying every resource is too slow. Many cloud providers do not provide APIs to query multiple resources at once, and the round trip time for each resource is hundreds of milliseconds. On top of this, cloud providers almost always have API rate limiting so Terraform can only request a certain number of resources in a period of time. Larger users of Terraform make heavy use of the -refresh=false flag as well as the -target flag in order to work around this. In these scenarios, the cached state is treated as the record of truth.
4. Syncing
In the default configuration, Terraform stores the state in a file in the current working directory where Terraform was run. This is okay for getting started, but when using Terraform in a team it is important for everyone to be working with the same state so that operations will be applied to the same remote objects. Remote state is the recommended solution to this problem. With a fully-featured state backend, Terraform can use remote locking as a measure to avoid two or more different users accidentally running Terraform at the same time, and thus ensure that each Terraform run begins with the most recent updated state.
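The remote-state-with-locking setup described in the Syncing section can be sketched with, for example, an S3 backend (the bucket and DynamoDB table names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tf-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks" # enables state locking
  }
}
```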
Question # 34
Given the Terraform configuration below, in which order will the resources be created?
resource "aws_instance" "web_server" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}
resource "aws_eip" "web_server_ip" {
  vpc      = true
  instance = aws_instance.web_server.id
}
A. aws_eip will be created first aws_instance will be created second B. aws_eip will be created first aws_instance will be created second C. Resources will be created simultaneously D. aws_instance will be created first aws_eip will be created second
Answer: D
Explanation: Implicit and Explicit Dependencies
By studying the resource attributes used in interpolation expressions, Terraform can automatically infer when one resource depends on another. In the example above, the reference to aws_instance.web_server.id creates an implicit dependency on the aws_instance named web_server. Terraform uses this dependency information to determine the correct order in which to create the different resources.
# Example of Implicit Dependency
resource "aws_instance" "web_server" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}
resource "aws_eip" "web_server_ip" {
  vpc      = true
  instance = aws_instance.web_server.id
}
In the example above, Terraform knows that the aws_instance must be created before the aws_eip. Implicit dependencies via interpolation expressions are the primary way to inform Terraform about these relationships, and should be used whenever possible.
Sometimes there are dependencies between resources that are not visible to Terraform. The depends_on argument is accepted by any resource and accepts a list of resources to create explicit dependencies for. For example, perhaps an application we will run on our EC2 instance expects to use a specific Amazon S3 bucket, but that dependency is configured inside the application code and thus not visible to Terraform. In that case, we can use depends_on to explicitly declare the dependency:
# Example of Explicit Dependency
# New resource for the S3 bucket our application will use.
resource "aws_s3_bucket" "example" {
  bucket = "terraform-getting-started-guide"
  acl    = "private"
}
# Change the aws_instance we declared earlier to now include "depends_on"
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  # Tells Terraform that this EC2 instance must be created only after the
  # S3 bucket has been created.
  depends_on = [aws_s3_bucket.example]
}
https://learn.hashicorp.com/terraform/getting-started/dependencies.html
Question # 35
True or False? Each Terraform workspace uses its own state file to manage the infrastructure associated with that particular workspace.
A. False B. True
Answer: B
Explanation: Explanation
The persistent data stored in the backend belongs to a workspace. Initially, the backend
has only one workspace, called "default", and thus there is only one Terraform state
associated with that configuration.
Question # 36
Your team uses Terraform OSS. You have created a number of reusable modules for important, independent network components that you want to share with your team to enhance consistency. What is the correct option/way to do that?
A. Terraform modules cannot be shared in the OSS version. Each developer needs to maintain their own modules and leverage them in the main tf file. B. Upload your modules with proper versioning to the Terraform public module registry. Terraform OSS is directly integrated with the public module registry, and can reference the modules from the code in the main tf file. C. Terraform module sharing is only available in the Enterprise version via the Terraform private module registry, so there is no way to enable it in the OSS version. D. Store your modules in a NAS / shared file server, and ask your team members to directly reference the code from there. This is the only viable option in Terraform OSS, which is better than individually maintaining module versions for every developer.
Answer: B
Explanation:
Software development encourages code reuse through reusable artifacts, such as libraries,
packages and modules. Most programming languages enable developers to package and
publish these reusable components and make them available on a registry or feed. For
example, Python has Python Package Index and PowerShell has PowerShell Gallery.
For Terraform users, the Terraform Registry enables the distribution of Terraform modules,
which are reusable configurations. The Terraform Registry acts as a centralized repository
for module sharing, making modules easier to discover and reuse.
The Registry is available in two variants:
* Public Registry houses official Terraform providers -- which are services that interact with
an API to expose and manage a specific resource -- and community-contributed modules.
* Private Registry is available as part of Terraform Cloud, and can host modules privately for sharing within an organization.
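Referencing a registry module from code is a one-line source plus an optional version constraint; a sketch (the module address is illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0" # pin a version range for consistency across the team
}
```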
Question # 37
Terraform Enterprise (also referred to as pTFE) requires what type of backend database for a clustered deployment?
A. PostgreSQL B. Cassandra C. MySQL D. MSSQL
Answer: A
Explanation: Explanation
External Services mode stores the majority of the stateful data used by the instance in an
external PostgreSQL database and an external S3-compatible endpoint or Azure blob
storage. There is still critical data stored on the instance that must be managed with
snapshots. Be sure to check the PostgreSQL Requirements for information that needs to
be present for Terraform Enterprise to work. This option is best for users with expertise
managing PostgreSQL or users that have access to managed PostgreSQL offerings like
AWS RDS.
Question # 38
Using multi-cloud and provider-agnostic tools provides which of the following benefits?
A. Operations teams only need to learn and manage a single tool to manage infrastructure, regardless of where the infrastructure is deployed. B. Increased risk due to all infrastructure relying on a single tool for management. C. Can be used across major cloud providers and VM hypervisors. D. Slower provisioning speed allows the operations team to catch mistakes before they are applied.
Answer: A,C
Explanation:
Using a tool like Terraform can be advantageous for organizations deploying workloads
across multiple public and private cloud environments. Operations teams only need to learn
a single tool, single language, and can use the same tooling to enable a DevOps-like
experience and workflows.
Question # 39
Your team has started using Terraform OSS in a big way, and now wants to deploy multi-region deployments (DR) in AWS using the same Terraform files. You want to deploy the same infra (VPC, EC2 …) in both us-east-1 and us-west-2 using the same script, and then peer the VPCs across both the regions to enable DR traffic. But when you run your script, all resources are getting created in only the default provider region. What should you do? Your provider setting is as below:
# The default provider configuration
provider "aws" {
  region = "us-east-1"
}
A. No way to enable this via a single script. Write 2 different scripts with different default providers in the 2 scripts, one for us-east, another for us-west. B. Create a list of regions, and then use a for-each to iterate over the regions, and create the same resources, one after the other, over the loop. C. Use the provider alias functionality, and add another provider for the us-west region. While creating the resources using the tf script, reference the appropriate provider (using the alias). D. Manually create the DR region once the Primary has been created, since you are using Terraform OSS, and multi-region deployment is only available in Terraform Enterprise.
Answer: C
Explanation:
You can optionally define multiple configurations for the same provider, and select which
one to use on a per-resource or per-module basis. The primary reason for this is to support
multiple regions for a cloud platform; other examples include targeting multiple Docker
hosts, multiple Consul hosts, etc.
To include multiple configurations for a given provider, include multiple provider blocks with
the same provider name, but set the alias meta-argument to an alias name to use for each
additional configuration. For example:
# The default provider configuration
provider "aws" {
  region = "us-east-1"
}
# Additional provider configuration for west coast region
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}
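As a sketch of the approach in option C, resources can be pinned to the aliased provider on a per-resource basis; the VPC names, CIDR blocks, and peering setup below are illustrative, not taken from the question:

```hcl
# Default provider stays in us-east-1
provider "aws" {
  region = "us-east-1"
}

# Aliased provider for the DR region
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Created with the default (us-east-1) provider
resource "aws_vpc" "primary" {
  cidr_block = "10.0.0.0/16"
}

# Same resource type, explicitly pinned to the aliased provider
resource "aws_vpc" "dr" {
  provider   = aws.west
  cidr_block = "10.1.0.0/16"
}

# Cross-region peering: requester in us-east-1, peer in us-west-2
resource "aws_vpc_peering_connection" "dr_peering" {
  vpc_id      = aws_vpc.primary.id
  peer_vpc_id = aws_vpc.dr.id
  peer_region = "us-west-2"
}
```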
Question # 40
What are some of the problems of how infrastructure was traditionally managed before Infrastructure as Code? (select three)
A. Requests for infrastructure or hardware required a ticket, increasing the time required to deploy applications
B. Traditional deployment methods are not able to meet the demands of the modern business, where resources tend to live days to weeks rather than months to years
C. Traditionally managed infrastructure can't keep up with cyclic or elastic applications
D. Pointing and clicking in a management console is a scalable approach and reduces human error as businesses are moving to a multi-cloud deployment model
Answer: A,B,C
Explanation:
Businesses are making a transition where traditionally-managed infrastructure can no
longer meet the demands of today's businesses. IT organizations are quickly adopting the
public cloud, which is predominantly API-driven. To meet customer demands and save
costs, application teams are architecting their applications to support a much higher level of
elasticity, supporting technology like containers and public cloud resources. These
resources may only live for a matter of hours; therefore, the traditional method of raising a
ticket to request resources is no longer a viable option. Pointing and clicking in a
management console does NOT scale and increases the chance of human error.
Question # 41
Can multiple provider blocks for AWS be part of a single configuration file?
A. False
B. True
Answer: B
Explanation:
You can optionally define multiple configurations for the same provider, and select which
one to use on a per-resource or per-module basis. The primary reason for this is to support
multiple regions for a cloud platform; other examples include targeting multiple Docker
hosts, multiple Consul hosts, etc.
To include multiple configurations for a given provider, include multiple provider blocks with
the same provider name, but set the alias meta-argument to an alias name to use for each
additional configuration. For example:
# The default provider configuration
provider "aws" {
region = "us-east-1"
}
# Additional provider configuration for west coast region
provider "aws" {
alias = "west"
region = "us-west-2"
}
The provider block without alias set is known as the default provider configuration. When
alias is set, it creates an additional provider configuration. For providers that have no
required configuration arguments, the implied empty configuration is considered to be the
default provider configuration.
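The explanation above mentions selecting a configuration on a per-module basis as well; a minimal sketch of that (the module name and local source path are hypothetical):

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Select the aliased configuration for an entire module;
# "./modules/vpc" is a hypothetical local module path
module "vpc_west" {
  source = "./modules/vpc"

  providers = {
    aws = aws.west
  }
}
```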
What does the terraform fmt command do?
A. Rewrite Terraform configuration files to a canonical format and style.
B. Deletes the existing configuration file.
C. Updates the font of the configuration file to the official font supported by HashiCorp.
D. Formats the state file in order to ensure the latest state of resources can be obtained.
Answer: A
Explanation:
The terraform fmt command is used to rewrite Terraform configuration files to a canonical
format and style. This command applies a subset of the Terraform language style
conventions, along with other minor adjustments for readability.
Other Terraform commands that generate Terraform configuration will produce
configuration files that conform to the style imposed by terraform fmt, so using this style in
your own files will ensure consistency.
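As an illustration of the canonical style terraform fmt produces, here is an unformatted block and its formatted result; this input/output pair is an assumption based on fmt's indentation and alignment rules, and the AMI ID is illustrative:

```hcl
# Before: inconsistent indentation and unaligned equals signs
# resource "aws_instance" "web" {
# ami = "ami-0c55b159cbfafe1f0"
#   instance_type="t2.micro"
# }

# After `terraform fmt`: two-space indent, aligned `=` signs
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
```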
With what feature of Terraform Cloud and/or Terraform Enterprise can you publish and maintain a set of custom modules which can be used within your organization?
A. Terraform registry
B. custom VCS integration
C. private module registry
D. remote runs
Answer: C
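Modules published to a private module registry are consumed with a hostname-prefixed source address of the form `<HOST>/<ORGANIZATION>/<NAME>/<PROVIDER>`; the organization and module names below are hypothetical:

```hcl
module "network" {
  # <HOST>/<ORGANIZATION>/<NAME>/<PROVIDER>
  source  = "app.terraform.io/example-org/vpc/aws"
  version = "1.0.0"
}
```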
Question # 45
Your company has a lot of workloads in AWS and Azure that were respectively created using CloudFormation and AzureRM templates. However, now your CIO has decided to use Terraform for all new projects, and has asked you to check how to integrate the existing environment with Terraform code. What should be your next plan of action?
A. Tell the CIO that this is not possible. Resources created in CloudFormation and AzureRM templates cannot be tracked using Terraform.
B. Use the terraform import command to import each resource one by one.
C. This is only possible in Terraform Enterprise, which has the TerraformConverter exe that can take any other template language like AzureRM and convert it to Terraform code.
D. Just write the Terraform config file for the new resources and run terraform apply; the state file will automatically be updated with the details of the new resources to be imported.
Answer: B
Explanation:
The terraform import command brings existing, externally created infrastructure under Terraform management one resource at a time; it updates the state file but does not generate configuration, so a matching resource block must be written first.
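The import workflow in option B requires writing a matching resource block first, then running terraform import against each existing resource's ID; the resource name and instance ID below are illustrative:

```hcl
# 1. Write a resource block for the existing resource
resource "aws_instance" "legacy_web" {
  # Arguments are filled in afterwards to match the real
  # resource, by inspecting `terraform plan` output
}

# 2. Then, at the CLI, map the real resource into state:
#    terraform import aws_instance.legacy_web i-1234567890abcdef0
```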