1. What is AutoEventWireup?
AutoEventWireup is a Boolean attribute of the @ Page directive that indicates whether the page's events are wired up automatically. When it is set to true, the ASP.NET runtime binds page events to handlers by naming convention (Page_Load, Page_Init, and so on), so you do not have to attach the delegates explicitly. To do this, the framework must search for and bind the matching handler methods for every Web Form (.aspx page) at run time. That search carries a performance cost, so if performance is a key issue, set the attribute to false and wire the handlers explicitly.
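For reference, the attribute appears in the @ Page directive at the top of an .aspx page; the file and class names below are placeholders, not taken from the text:

```
<%@ Page Language="C#" AutoEventWireup="true"
    CodeFile="Default.aspx.cs" Inherits="_Default" %>
```

With AutoEventWireup="true", a method named Page_Load is bound to the Load event automatically; with "false", you attach it yourself, for example with this.Load += new EventHandler(Page_Load); in OnInit.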
2. What is the Common Language Runtime (CLR)?
The .NET Framework provides a run-time environment called the Common Language Runtime, which manages the execution of code and provides services that make the development process easier. Compilers and tools expose the runtime's functionality and enable you to write code that benefits from this managed execution environment. Code that you develop with a language compiler that targets the runtime is called managed code; it benefits from features such as cross-language integration, cross-language exception handling, enhanced security, versioning and deployment support, a simplified model for component interaction, and debugging and profiling services.
3. What is pipelining?
Pipelining is a technique in which processing is divided into stages, with each stage performing one operation and passing its result to the next. If there are n stages, up to n operations can be in progress at once. Pipelining is used to increase the throughput of the processor: the latency of a single operation is unchanged, but once the pipeline is full, one result completes per cycle.
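The throughput gain can be sketched with a small calculation; the stage count and cycle time below are illustrative, not from the text:

```python
def execution_time(n_ops, n_stages, stage_time):
    """Time to run n_ops operations with and without a pipeline.

    Without pipelining each operation takes n_stages * stage_time.
    With pipelining the first result appears after n_stages cycles,
    then one result completes every cycle.
    """
    unpipelined = n_ops * n_stages * stage_time
    pipelined = (n_stages + (n_ops - 1)) * stage_time
    return unpipelined, pipelined

unpipelined, pipelined = execution_time(n_ops=100, n_stages=5, stage_time=1)
print(unpipelined, pipelined)   # 500 vs 104: speedup approaches n_stages
```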
4. What is caching?
Storing frequently used items in memory is referred to as caching. Caching is a tried and tested technique for improving performance.
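A minimal illustration of the idea in Python, using an in-memory cache in front of a slow computation (the function and its cost are hypothetical):

```python
import functools

@functools.lru_cache(maxsize=None)
def expensive_lookup(key):
    # Stand-in for a slow computation or database query.
    return sum(range(key))

expensive_lookup(1_000_000)  # slow: computed once
expensive_lookup(1_000_000)  # fast: served from the in-memory cache
print(expensive_lookup.cache_info().hits)  # 1
```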
5. What is Windows DNA?
Windows DNA (Distributed interNet Applications Architecture) was a concept developed by Microsoft for delivering n-tier applications. It supported two-tier, three-tier, and n-tier architectures in which components communicate and transfer data, but in practice building such applications with Windows DNA (for example, with ASP 3.0) was difficult: it suffered from backward-compatibility and component-versioning problems and did not support side-by-side execution. These shortcomings led to the .NET platform, which provides safer execution of code through the CLR, side-by-side execution of assemblies, and simpler deployment, resolving the problems that plagued Windows DNA.
6. What are write-back and write-through caches?
In a write-through cache, every write updates both the cache and main memory, so memory is always consistent with the cache, but every write pays the cost of a memory access. In a write-back cache, writes update only the cache; a modified (dirty) block is written back to main memory only when it is evicted. Write-back reduces memory traffic, but it requires a dirty bit per block and leaves main memory temporarily stale.
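The write-back and write-through policies named in the question above can be sketched in a toy model that counts only memory writes, ignoring reads and multi-block eviction:

```python
class WriteThroughCache:
    """Every write goes to both the cache and main memory."""
    def __init__(self):
        self.cache, self.memory, self.mem_writes = {}, {}, 0

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value
        self.mem_writes += 1          # memory touched on every write

class WriteBackCache:
    """Writes stay in the cache; memory is updated only on eviction."""
    def __init__(self):
        self.cache, self.memory, self.mem_writes = {}, {}, 0
        self.dirty = set()

    def write(self, addr, value):
        self.cache[addr] = value
        self.dirty.add(addr)          # mark block dirty, no memory traffic

    def evict(self, addr):
        if addr in self.dirty:        # write back only if modified
            self.memory[addr] = self.cache[addr]
            self.mem_writes += 1
            self.dirty.discard(addr)
        self.cache.pop(addr, None)

wt, wb = WriteThroughCache(), WriteBackCache()
for v in range(10):                   # 10 writes to the same address
    wt.write(0x40, v)
    wb.write(0x40, v)
wb.evict(0x40)
print(wt.mem_writes, wb.mem_writes)   # 10 vs 1
```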
7. What are superscalar machines and VLIW machines?
As superscalar machines become more complex, the difficulty of scheduling instruction issue grows. Another way of looking at superscalar machines is as dynamic instruction schedulers: the hardware decides on the fly which instructions to execute in parallel, out of order, etc.
An alternative approach would be to get the compiler to do it beforehand - that is, to statically schedule execution. This is the basic concept behind Very Long Instruction Word, or VLIW machines.
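The contrast can be sketched as a tiny static scheduler: given a hypothetical program and its data dependences, a VLIW-style compiler packs independent instructions into fixed-width bundles ahead of time, where a superscalar machine would make the same decisions in hardware at run time:

```python
def schedule_vliw(instrs, deps, width=2):
    """Greedy list scheduling: pack independent instructions into bundles.

    instrs: instruction names in program order
    deps:   dict mapping an instruction to the set it depends on
    """
    done, bundles = set(), []
    remaining = list(instrs)
    while remaining:
        bundle = []
        for ins in remaining:
            ready = deps.get(ins, set()) <= done   # all inputs computed?
            if ready and len(bundle) < width:
                bundle.append(ins)
        if not bundle:
            raise ValueError("cyclic dependences")
        for ins in bundle:
            remaining.remove(ins)
        done.update(bundle)
        bundles.append(bundle)
    return bundles

# i3 needs i1; i4 needs i2; i1 and i2 are independent.
print(schedule_vliw(["i1", "i2", "i3", "i4"],
                    {"i3": {"i1"}, "i4": {"i2"}}))
# [['i1', 'i2'], ['i3', 'i4']]
```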
8. What are different stages of a pipe?
There are two types of pipelines:
Instruction pipelines, where the different stages of instruction fetch and execution are handled in a pipeline.
Arithmetic pipelines, where the different stages of an arithmetic operation are handled along the stages of a pipeline.
9. How is a block found in a cache?
Each place in the cache records a block's tag (as well as its data). Of course, a place in the cache may be unoccupied, so each place usually maintains a valid bit. To find a block in the cache:
Use the index field of the block address to determine the place (or set of places).
For that place (or each place in the set), check that the valid bit is set and compare the tag with that of the block address; this can be done in parallel for all places in a set.
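The lookup described above can be sketched for a direct-mapped cache; the sizes are illustrative (a 4-line cache with 16-byte blocks):

```python
NUM_LINES, BLOCK_SIZE = 4, 16        # illustrative powers of two

# Each line holds (valid_bit, tag); the data itself is omitted.
lines = [(False, None)] * NUM_LINES

def split(addr):
    """Split an address into tag, index, and block offset."""
    offset = addr % BLOCK_SIZE
    index = (addr // BLOCK_SIZE) % NUM_LINES
    tag = addr // (BLOCK_SIZE * NUM_LINES)
    return tag, index, offset

def lookup(addr):
    tag, index, _ = split(addr)
    valid, stored_tag = lines[index]   # use the index to pick the place
    return valid and stored_tag == tag # check valid bit, compare tags

def fill(addr):
    tag, index, _ = split(addr)
    lines[index] = (True, tag)

fill(0x40)
# 0x40 and 0x80 map to the same line but have different tags:
print(lookup(0x40), lookup(0x80))  # True False
```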
10. How do you improve the cache performance?
Cache performance is usually measured as average memory access time: hit time + miss rate x miss penalty. It can be improved by reducing any of the three terms: reduce the miss rate (larger caches, higher associativity, better block sizes, compiler optimizations such as loop blocking), reduce the miss penalty (multi-level caches, write buffers, giving priority to read misses), or reduce the hit time (small and simple first-level caches, keeping address translation off the critical path).
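For a hardware cache, improvement is usually quantified through average memory access time; the hit time, miss rate, and miss penalty below are illustrative numbers, not from the text:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles: hit + rate * penalty."""
    return hit_time + miss_rate * miss_penalty

# 1-cycle hit, 5% miss rate, 100-cycle miss penalty:
print(amat(1, 0.05, 100))       # 6.0 cycles
# Halving the miss rate helps far more than halving the hit time here:
print(amat(1, 0.025, 100))      # 3.5 cycles
print(amat(0.5, 0.05, 100))     # 5.5 cycles
```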
11. What is bus contention and how do you eliminate it?
Bus contention occurs when more than one memory module attempts to access the bus simultaneously. It can be reduced by using hierarchical bus architecture.
12. What is aliasing?
When considering the reconstruction of a signal, we are familiar with the idea of the Nyquist rate. This concept allows us to find the sampling rate that will provide for perfect reconstruction of our signal. If we sample at too low a rate (below the Nyquist rate), then problems arise that make perfect reconstruction impossible; this problem is known as aliasing. Aliasing occurs when there is an overlap in the shifted, periodic copies of our original signal's Fourier transform, i.e. its spectrum.
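Aliasing can be demonstrated numerically: a 7 Hz sine sampled at 10 Hz (below its Nyquist rate of 14 Hz) produces the same sample values, up to sign, as a 3 Hz sine, its alias at fs - f. The frequencies are chosen for illustration:

```python
import math

fs = 10.0            # sampling rate, Hz (below the Nyquist rate 2*7 = 14 Hz)
n = range(8)
high = [math.sin(2 * math.pi * 7 * k / fs) for k in n]   # 7 Hz signal
low  = [math.sin(2 * math.pi * 3 * k / fs) for k in n]   # 3 Hz alias

# The two sample sequences are indistinguishable (up to sign):
print(all(abs(h + l) < 1e-9 for h, l in zip(high, low)))  # True
```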
13. What is the difference between a latch and a flip-flop?
Latches are level sensitive, while flip-flops are edge sensitive. A positive-level latch is transparent while the enable is high, and it latches the final input just before the enable changes level (i.e., before enable goes to '0', or before the clock goes to its negative level).
A positive-edge flip-flop updates its output only when the clock input changes from '0' to '1' ('1' to '0' for a negative-edge flip-flop).
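The difference can be sketched in a small simulation; the clock waveform and data values below are made up for illustration:

```python
class DLatch:
    """Level sensitive: output follows D whenever enable is high."""
    def __init__(self):
        self.q = 0
    def tick(self, enable, d):
        if enable:                            # transparent while enable == 1
            self.q = d
        return self.q

class DFlipFlop:
    """Edge sensitive: output captures D only on a 0 -> 1 clock edge."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def tick(self, clk, d):
        if self.prev_clk == 0 and clk == 1:   # rising edge only
            self.q = d
        self.prev_clk = clk
        return self.q

latch, flop = DLatch(), DFlipFlop()
clk  = [0, 1, 1, 1, 0]
data = [5, 6, 7, 8, 9]
latch_out = [latch.tick(c, d) for c, d in zip(clk, data)]
flop_out  = [flop.tick(c, d) for c, d in zip(clk, data)]
print(latch_out)  # [0, 6, 7, 8, 8]: follows D while the clock is high
print(flop_out)   # [0, 6, 6, 6, 6]: captures D only at the rising edge
```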
14. What is the race-around condition? How can it be overcome?
The race-around condition occurs in a level-triggered JK flip-flop when both J and K are held at 1. While the clock is high, the output toggles repeatedly, because each new output feeds back to the inputs after one propagation delay; if the clock pulse width is longer than the propagation delay, the output at the end of the pulse is unpredictable. It can be overcome by using a master-slave JK flip-flop, by using an edge-triggered flip-flop instead of a level-triggered one, or by keeping the clock pulse width shorter than the propagation delay.
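The race-around condition in a level-triggered JK flip-flop can be sketched in a toy simulation; time is measured in units of the gate propagation delay, and the pulse widths are illustrative:

```python
def jk_level_triggered(q, j, k, clock_high_delays):
    """Level-triggered JK with the clock held high for the given number
    of propagation delays.  With J = K = 1 the output toggles once per
    delay while the clock is high: the race-around condition."""
    for _ in range(clock_high_delays):
        if j and k:
            q = 1 - q     # each toggle feeds back to the inputs
    return q

# Pulse width of 1 propagation delay: a single, predictable toggle.
print(jk_level_triggered(q=0, j=1, k=1, clock_high_delays=1))  # 1
# Wide pulse: the output toggles repeatedly, so the final value depends
# on the exact pulse width and is unpredictable in real hardware.
print(jk_level_triggered(q=0, j=1, k=1, clock_high_delays=5))  # 1
print(jk_level_triggered(q=0, j=1, k=1, clock_high_delays=6))  # 0
```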
15. What is the purpose of cache? How is it used?
Caching is often considered a performance-enhancement tool rather than a way to store application data. If you spend server resources repeatedly accessing the same data, use caching instead. Caching data can bring huge performance benefits, so whenever you find that you need to frequently access data that does not change often, cache it in the Cache object and your application's performance will improve.
16. What are the types of memory management?
Memory management plays a crucial role in every operating system. There are several types of memory management, such as:
1. Storage memory management
2. I/O memory management
17. What are different pipelining hazards and how are they eliminated?
Pipeline hazards are situations that prevent the next instruction in the stream from executing in its designated clock cycle. There are three kinds: structural hazards, where two instructions need the same hardware resource at the same time (eliminated by duplicating resources or stalling); data hazards, where an instruction depends on the result of an earlier instruction that is still in the pipeline (eliminated by forwarding/bypassing, or by stalling until the result is available); and control hazards, caused by branches (reduced by branch prediction, delayed branches, or flushing the pipeline on a mispredict).
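A data (read-after-write) hazard can be sketched by checking whether an instruction reads a register that an earlier instruction, still in the pipeline, has yet to write; the instruction encoding here is a made-up illustration:

```python
def raw_hazard(earlier, later):
    """Detect a read-after-write hazard between two instructions.

    Each instruction is (dest_register, set_of_source_registers).
    """
    dest, _ = earlier
    _, sources = later
    return dest in sources

# ADD r1, r2, r3  followed by  SUB r4, r1, r5:
# SUB reads r1 before ADD has written it back -> hazard.
add = ("r1", {"r2", "r3"})
sub = ("r4", {"r1", "r5"})
print(raw_hazard(add, sub))  # True
# With forwarding, ADD's ALU result is bypassed directly to SUB,
# removing the stall; without forwarding the pipeline must stall.
```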