
Quantifying software development, which is full of uncertainty, and daring to say when it will be done is a genuinely hard thing.
It is even harder when you are not doing the development yourself but working with a team, and you have to estimate "how long someone else will take to get this done."

Project meetings always follow the same pattern.

Product planner: We need to build a product like this. What does the development team think?
             (shows a slick presentation)
Developers: (silently) Whoa, talk about flights of fancy.
The developers start debating among themselves.
One of the developers: Hmm, couldn't we just do it like this?
                    (blah blah, enthusiastically explains the implementation approach he has in mind)
Another developer: Hmm, but I don't think that will work with the current architecture. We'd have to build something new, or think about throwing out a lot of what we have and redoing it.

Then for a good while the developers sink into their own imaginations, immediately talking through their implementation ideas, the problems, and so on. ㅎ
The design has already begun~~~!!!!
And in the middle of it all, a bucket of cold water:

Project leader: So when can it be done?
Developers: ...


From a developer's standpoint, it gets harder and harder to say when something will be done..!!!
It should get easier as experience accumulates, but the required specs and the complexity of the software keep growing, so the call only gets harder to make.

Estimating the scope of work and planning it out is a crucial part of a project's success and of the product strategy.

In agile, the units for dividing up this work are called "stories" and "iterations."

A story is a piece of the work; an iteration is a unit of execution time.

The most important part of a project's initial setup is producing the development schedule.
But a schedule requires estimating each module's development time and the resources to invest, and trying to pin those down exactly and plan around them actually adds risk.
The reason: even if the person producing the estimate is a developer, it is hard to answer "this finishes then, that finishes then" without knowing what will happen during development.
And if it is a module the developer has never built before, producing a development time at all may be unreasonable.

Nevertheless, for the project to move, the development team has to produce a development schedule.

Producing this uncertain kind of development schedule is called "estimation," and how effectively and as realistically as possible you can do it is what I'll call "the art of estimation."
(Sounds a bit like "The Art of Fighting"? ㅎ)

Estimating a project's duration
starts with estimating the project's size.
If you already know the team's pace from previous projects, estimating gets easier.
And if the team has experience developing in story units, its velocity is that much clearer.

This gives you an estimate for the whole project before development starts; once actual development begins, you use that estimate as the baseline and adjust for the gap against the real scope.

For example:

Suppose we are building an automated driving system and define the stories as follows.
Driving system module: 100 points
Road driving module: 150 points
Driving route module: 50 points

So we initially estimated a 300-point project. If the team's past velocity was 100 points in 4 iterations (25 points of stories per iteration), we would expect this to be a project we can finish in about 12 iterations.

But suppose that once development actually starts, only about 10 points of stories get done per iteration. Then we know 15 of the planned 25 points are left over each time, and we can project that the project will take 300 / 10 = 30 iterations, about 2.5 times longer than estimated.
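The velocity arithmetic above can be sketched as a tiny helper; the function name `forecastIterations` is my own, and the numbers come straight from the example:

```cpp
// Forecast how many iterations the unfinished work still needs,
// given the observed velocity (story points completed per iteration).
int forecastIterations(int totalPoints, int donePoints, int pointsPerIteration)
{
    int remaining = totalPoints - donePoints;
    // Round up: a partially filled iteration still occupies a whole one.
    return (remaining + pointsPerIteration - 1) / pointsPerIteration;
}
```

With the numbers above, the planned velocity of 25 gives 300 / 25 = 12 iterations, while the observed velocity of 10 gives 30.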

So does the initial estimate become useless?
No! And we must not let it become useless.
What I mean is this: when we spec out development, we often put duration at the center.
This takes so many days, that takes so many days, and so on.
If story points were assigned with that mindset, much of it will have to be revised.
Then what should a story point be?
Think of it as the size of the work. If the scope of the work was scored by considering the number of modules, the number of classes, the amount of code, the difficulty, and so on,
then the overall size of the work is determined. How big the work is and how long it takes to finish it are two different things.

Dividing the work into size units lets you compare the difficulty of stories.
That is, a story worth 10 points is twice as hard as a story worth 5 points.
And a story worth 10 points is half as hard, and simpler, than a story worth 20 points.

I want to stress once more that this is separate from the question of how long a 10-point story takes.

In the previous project 25 points may have taken one iteration, while in this project 10 points may take one iteration.
(Mostly the opposite happens: as project experience accumulates, development speed goes up, so where 20 points filled one iteration last time, this time you may finish 30 points within one iteration.)

Also, when measuring points, as the developers' skill improves the points themselves shift accordingly.




Use values like these for estimates:

1, 2, 3, 5, and 8
1, 2, 4, and 8 (when estimating larger chunks of work, using these values makes it easier to cope with the greater uncertainty)

The advantage of having gaps between the numbers is this: for a story that feels bigger than 3 but smaller than 4, around 3.5 or so, you just write 3.
Doing so keeps the story points from getting complicated.
After all, what we wrote down as 3 is quite likely to turn out to be 3.5 in practice.
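The "write 3 for a 3.5-ish story" rule can be sketched like this; `snapToScale` and hard-coding the {1, 2, 3, 5, 8} scale as a list are my own framing:

```cpp
#include <vector>

// Snap a raw size estimate down to the nearest allowed story-point
// value on the {1, 2, 3, 5, 8} scale suggested in the text.
int snapToScale(double rawEstimate)
{
    const std::vector<int> scale = {1, 2, 3, 5, 8};
    int snapped = scale.front();
    for (int v : scale)
        if (v <= rawEstimate)
            snapped = v;  // keep the largest scale value not above the estimate
    return snapped;
}
```

A 3.5-ish story snaps to 3, and anything between 5 and 8 stays at 5, so the gaps in the scale absorb the false precision.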

If you decide to include the value '0' in your estimates, you have to think about which work gets a 0.
(Including 0 can actually help project estimation a great deal.)

A value of 0 means work that is needed but so simple that it does not affect project execution.
(Or something that resolves itself automatically; such cases are very rare.)

Before using the value 0, first make sure the team understands it clearly.
A 0 does not mean there is no work, so you cannot process a pile of 0-point stories all at once (13 × 0 within a single iteration).
A 0 is like a free lunch, a small task you can handle within an iteration, but the team needs to recognize that free lunches come in limited quantity.
If you really want to process many 0-point stories in one go, bundle them together into a story worth 1 point or more.
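Bundling a pile of 0-point stories into something visible could look like this; the five-trivial-tasks-per-point ratio is purely an assumption for illustration:

```cpp
// Convert a batch of 0-point stories into a bundled story worth at
// least one point, so they stop looking like free work. The ratio of
// trivial tasks per point is an assumed example value.
int bundleZeroPointStories(int zeroStoryCount, int storiesPerPoint = 5)
{
    if (zeroStoryCount == 0)
        return 0;
    // Round up so any nonzero batch is worth at least one point.
    return (zeroStoryCount + storiesPerPoint - 1) / storiesPerPoint;
}
```

Thirteen "free" stories become a visible 3-point story instead of silently eating an iteration.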



 

[Gesture Events]

When the user switches from a one-finger tap to using two fingers, it is considered the beginning of a gesture.

This causes gesture events to be generated, which can be intercepted by overriding the appropriate methods.

 

 

mouseDown  ---> gestureStarted   ----> gestureChanged ----> gestureEnded ----> mouseUp



So it seems the first touch is made to behave like a mouse, while from the second touch on it is treated as a gesture event, and gestureChanged is used to figure out which action it is.

--> my thinking up to this point

2009.12.06
If, as above, the first touch is mouseDown and gestures begin from the second touch, then developers have to add code to both the mouseDown and the gesture paths at the same time.
To avoid this, send mouseDown and gestureStarted together on the first touch.
Then multi-touch applications use gestureStarted rather than a separate mouseDown,
while other applications simply keep handling the mouseDown event as before.
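The idea above (send mouseDown and gestureStarted together on the first touch) can be sketched as a dispatch function; all names here are mine, not from any real gesture API:

```cpp
#include <string>
#include <vector>

// Sketch of the proposed dispatch: the first touch emits mouseDown AND
// gestureStarted together, so multi-touch apps can listen only to
// gesture events while legacy apps keep handling mouseDown.
std::vector<std::string> dispatchTouch(int activeTouches)
{
    std::vector<std::string> events;
    if (activeTouches == 1) {
        events.push_back("mouseDown");
        events.push_back("gestureStarted");  // sent together on first touch
    } else if (activeTouches >= 2) {
        events.push_back("gestureChanged");  // second finger onward
    }
    return events;
}
```

A legacy app simply ignores gestureStarted, while a multi-touch app never needs to duplicate code in the mouseDown path.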




The more of this book I read, the more there is to take away.

I only skimmed the topics that looked interesting; summarizing what I read:


Chapter 1. Early forays in user interface design

[Why do you have to click the Start button to turn the computer off?]
In the Windows 95 days, the taskbar had no Start button (it wasn't even called the taskbar at the time).

Instead of a Start button, three buttons were shown at the bottom left of the screen: a System button (its icon was the Windows flag), a Find button, and a Help button.
Over time, the Find and Help buttons were merged into the System button.
Menus such as window arrangement moved to other parts of the user interface, and menu items such as the task list disappeared entirely.

The biggest problem revealed in usability testing was that users would turn on the computer and then not know what to do next.

It was at this point that someone suggested putting the label 'Start' on the system menu.
It meant 'click here.' This simple change improved the usability test results dramatically, because users now knew what to click when they wanted to do something.
When asked to turn the computer off, they clicked the Start button, because to turn the computer off you have to start somewhere.

Even to turn off the PC you press 'Start'... I'm not sure whether that is ironic or perfectly natural, but it shows that usability
can be maximized by a really simple idea.


[When do you disable an option, and when do you remove it?]
When a menu item or dialog option cannot be used, you can either disable it or remove it. What rule should apply here?

Experiments showed that when something is visible but unusable, users expect that some manipulation will make it usable.

So if there is something the user can do to enable the feature, show the menu item but disable it.
For example, in a media player the Stop option is disabled while no media file is playing, and it becomes enabled when playback starts.

On the other hand, if for some reason there is an option the user can never operate, remove it.
If you leave it in, users will waste time looking for a way to enable it.
For example, a color adjustment option should not appear for a printer that cannot print in color, because nothing this program does will ever make that printer produce color output.

Similarly, consider a text adventure game. You enter a command like 'take the torch from the wall' and the computer answers 'you can't do that yet.' That is the adventure-game equivalent of graying out a menu item. The user starts wondering: "Hmm, do I need a chair? Is the torch too hot? Am I carrying too many items? Maybe I should find another character and have them do it."

If the torch could never be taken at all, this just makes the user waste effort. In an adventure game that frustration is part of the fun, but frustration in a computer program is not something people enjoy.

Note that this is a guideline, not a strict rule.
Other considerations may take precedence over this principle. For example, you might decide a consistent menu structure is preferable,
because it is less confusing
(for instance, a media player might display the video-related options while playing a music file, but keep them disabled).


I copied this into my blog because it rang true as I read, and because it is something I had always seen but never thought about.
(Not copy & paste. I copied it by hand. ㅠ_ㅠ)



One thing I ran into recently:

some existing code implemented text sliding using a Timer.

The old timer code used a callback, and as I was converting it without much thought...

Ugh... how am I supposed to implement the timer callback? I hit a wall.


Original code
KeyPressedProc()
{
    hSlideTimer = CreateTimer("test", SlideTimerCallback);
    StartTimer(hSlideTimer);
}


BOOL SlideTimerCallback(HTimer hTimer, int param)
{
    /* Implement text sliding */
    return TRUE;
}

C++ conversion code
/**/
class TextView;

TextView::keyPress()
{
    hTimer = CreateTimer("test", /* argh!!! what goes here? */);
    StartTimer(hTimer);
}

BOOL TextView::SlideTimer(HTimer hTimer, int param)
{
    /* Implement text sliding */
    return TRUE;
}


In the end I temporarily changed TextView::SlideTimer into a static function... ㅡㅡ;;;


--------------------------------------------------------------------------
TextView::keyPress()
{
    hTimer = CreateTimer("test", TextView::SlideTimer,
                         reinterpret_cast<int>(this));
    StartTimer(hTimer);
}


/* Declared static in the class; 'static' is not repeated on the definition. */
BOOL TextView::SlideTimer(HTimer hTimer, int param)
{
    TextView* pView = reinterpret_cast<TextView*>(param);
    :
    :
}

After changing it like this, the next problem: when the timer expires and the SlideTimer callback runs, I have to consider the worst case, namely, what if the TextView instance has already been destroyed?

If so, the only fix is to manage instances through handles and access them via handles, just as we did in C!!

From this experience, it seems worth writing a CTimer class and creating a timer subscriber on it, so that objects can receive timeout events.
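A minimal sketch of that CTimer subscription idea in modern C++, assuming callbacks can be held in a map keyed by handle. Every name here is invented, and a real implementation would also need thread safety and actual timer scheduling:

```cpp
#include <functional>
#include <map>
#include <utility>

// Objects subscribe for timeout events through a handle; unsubscribing
// in the destructor means an expired timer can never call into a dead
// instance, which is the dangling-this problem of the static version.
class CTimer {
public:
    using Handle = int;
    Handle subscribe(std::function<void()> onTimeout) {
        subscribers_[nextHandle_] = std::move(onTimeout);
        return nextHandle_++;
    }
    void unsubscribe(Handle h) { subscribers_.erase(h); }
    // Called by the timer machinery when the timer expires.
    void fire() {
        for (auto& entry : subscribers_)
            entry.second();
    }
private:
    Handle nextHandle_ = 1;
    std::map<Handle, std::function<void()>> subscribers_;
};

// A TextView-like subscriber that unsubscribes itself on destruction.
class TextView {
public:
    explicit TextView(CTimer& t) : timer_(t) {
        handle_ = timer_.subscribe([this] { slid_ = true; /* slide text */ });
    }
    ~TextView() { timer_.unsubscribe(handle_); }  // no dangling callback
    bool slid() const { return slid_; }
private:
    CTimer& timer_;
    CTimer::Handle handle_;
    bool slid_ = false;
};
```

Because ~TextView unsubscribes, an expired timer firing after the view is gone simply finds no subscriber, with no handle table needed in user code.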





When using a mobile phone, we memorize the menus we use often
and punch them in back to back to launch the application we want in no time.

For example, pressing Menu -> 5, 5, 1 brings up a favorite game. Bomb link?? ^^;;


Inside the software, the reason this works is very simple: the decision about who should handle a key is made not at the moment the key is pressed, but when the key event is pulled from the Event Queue and thrown to whoever is on top at that moment to process.

In other words, 5, 5, 1 working in sequence is not a special feature; the process is just slow, and the delay only makes it look that way.

In a multi-process (multi-task) environment things can be a bit different.
With two or more applications running as processes, when a H/W interrupt produces a key event,
you have to decide which task's Queue to put it in.
Usually it goes into the Queue of the task currently on top; if it has been queued but the application order then changes and another app comes to the front, the previously foreground application just ignores the key instead of processing it.

But in such an environment, the fast 5, 5, 1 input mentioned earlier may not work when it has to pass through several applications.



Looking at it from another angle,
is consecutive key processing actually useful? That is worth thinking about.
If application loading is slow and the input ends up not as a clean 5, 5, 1 but as a jumble like End key, 5, 4, 1, Menu key... and you try to process all of it, you will provoke serious irritation.

So the number of queued key events is usually limited, to around 3 to 5. (Personally I think 2 would be enough.)

With such a limit, another problem can occur: keys in the middle can vanish, leaving only the last ones.
What I mean is, say you dial 010-5555-2324 to make a call,
but you press it so fast that it comes out as 010, 5, 52, 24, with keys dropped here and there in the middle.
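The key-limit behavior can be sketched as a bounded queue; `KeyQueue` and the limit of 3 are illustrative values only:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// A key-event queue with a depth limit: once the queue is full, new
// keys are silently dropped, which is how digits can go missing from
// a fast-dialed number when the foreground app drains only rarely.
class KeyQueue {
public:
    explicit KeyQueue(std::size_t limit) : limit_(limit) {}
    void push(char key) {
        if (queue_.size() < limit_)
            queue_.push_back(key);  // otherwise the key is lost
    }
    std::string drainAll() {
        std::string keys(queue_.begin(), queue_.end());
        queue_.clear();
        return keys;
    }
private:
    std::size_t limit_;
    std::deque<char> queue_;
};
```

If the app drains the queue only occasionally, everything typed while the queue was full is lost, which is how 010-5555-2324 can degrade into fragments like 010, 5, 52, 24.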


I've never personally found this inconvenient, but problems like these do surface from time to time.





Today I looked through DLL-related information and Custom Control material, MSDN only, and it made me think:
commercial OSes really aren't something just anyone can build!!
SubclassWindow
SubclassDlgItem

So dialogs end up needing special-case handling after all???


[From MSDN]

http://msdn.microsoft.com/ko-kr/library/bk2h3c6w(VS.80).aspx
This note describes the MFC Support for custom and self-drawing controls. Dynamic subclassing is also described. General advice on ownership of CWnd objects vs. HWNDs is presented.

The MFC sample application CTRLTEST illustrates many of these features. Please refer to the source code for the MFC General sample CTRLTEST and online help.

Windows provides support for "owner draw" controls and menus. These are Windows messages sent to a parent window of a control or menu that allow you to customize the visual appearance and behavior of the control or menu.

MFC directly supports owner draw with the message map entries:

  • CWnd::OnDrawItem

  • CWnd::OnMeasureItem

  • CWnd::OnCompareItem

  • CWnd::OnDeleteItem

You can override these in your CWnd-derived class (usually a dialog or main frame window) to implement the owner-draw behavior.

This approach does not lead to reusable code. If you have two similar controls in two different dialogs, you must implement the custom control behavior in two places. The MFC-supported self-drawing control architecture solves this problem.

MFC provides a default implementation (in CWnd and CMenu) for the standard owner-draw messages. This default implementation will decode the owner-draw parameters and delegate the owner-draw messages to the controls or menu. This is called "self-draw" since the drawing (/measuring/comparing) code is in the class of the control or menu, not in the owner window.

This allows you to build reusable control classes that display the control using "owner draw" semantics. The code for drawing the control, not the owner of the control, is in the control class. This is an object-oriented approach to custom control programming.

  • For self-draw buttons:

    CButton::DrawItem(LPDRAWITEMSTRUCT);
            // draw this button
  • For self-draw menus:

    CMenu::MeasureItem(LPMEASUREITEMSTRUCT);
            // measure the size of an item in this menu
    CMenu::DrawItem(LPDRAWITEMSTRUCT);
            // draw an item in this menu
  • For self-draw list boxes:

    CListBox::MeasureItem(LPMEASUREITEMSTRUCT);
            // measure the size of an item in this list box
    CListBox::DrawItem(LPDRAWITEMSTRUCT);
            // draw an item in this list box
    
    CListBox::CompareItem(LPCOMPAREITEMSTRUCT);
            // compare two items in this list box if LBS_SORT
    CListBox::DeleteItem(LPDELETEITEMSTRUCT);
            // delete an item from this list box
  • For self-draw combo boxes:

    CComboBox::MeasureItem(LPMEASUREITEMSTRUCT);
            // measure the size of an item in this combo box
    CComboBox::DrawItem(LPDRAWITEMSTRUCT);
            // draw an item in this combo box
    
    CComboBox::CompareItem(LPCOMPAREITEMSTRUCT);
            // compare two items in this combo box if CBS_SORT
    CComboBox::DeleteItem(LPDELETEITEMSTRUCT);
            // delete an item from this combo box

For details on the owner-draw structures (DRAWITEMSTRUCT, MEASUREITEMSTRUCT, COMPAREITEMSTRUCT, and DELETEITEMSTRUCT) refer to the MFC documentation for CWnd::OnDrawItem, CWnd::OnMeasureItem, CWnd::OnCompareItem, and CWnd::OnDeleteItem respectively.

For self-drawing menus, you must override both MeasureItem and DrawItem member functions.

For self-drawing list boxes and combo boxes, you must override MeasureItem and DrawItem. You must specify the OWNERDRAWVARIABLE style in the dialog template (LBS_OWNERDRAWVARIABLE and CBS_OWNERDRAWVARIABLE respectively). The OWNERDRAWFIXED style will not work with self-drawing items since the fixed item height is determined before self-drawing controls are attached to the list box. (The Win 3.1 member functions CListBox::SetItemHeight and CComboBox::SetItemHeight can be used to get around this limitation.)

In addition, note that switching to an OWNERDRAWVARIABLE style will affect the NOINTEGRALHEIGHT style. Because the control can not calculate an integral height with variable sized items, the default style of INTEGRALHEIGHT is ignored and the control is always NOINTEGRALHEIGHT. If your items are fixed height, you can prevent partial items from being drawn by specifying the control size to be an integral multiplier of the item size.

For self-drawing list boxes and combo boxes with the SORT style (LBS_SORT and CBS_SORT respectively), you must override the CompareItem member function.

For self-drawing list boxes and combo boxes, DeleteItem is not normally overridden. DeleteItem can be overridden if additional memory or other resources are stored with each list box or combo box item.

The MFC General sample CTRLTEST provides samples of a self-draw menu (showing colors) and a self-draw list box (also showing colors).

The most typical example of a self-drawing button is a bitmap button (a button that shows one, two, or three bitmap images for the different states). This is provided in the MFC class CBitmapButton.

Subclassing is the Windows term for replacing the WndProc of a window with a different WndProc and calling the old WndProc for default (superclass) functionality.

This should not be confused with C++ class derivation (C++ terminology uses the words "base" and "derived" while the Windows object model uses "super" and "sub"). C++ derivation with MFC and Windows subclassing are functionally very similar, except C++ does not support a feature similar to dynamic subclassing.

The CWnd class provides the connection between a C++ object (derived from CWnd) and a Windows window object (also known as an HWND).

There are three common ways these are related:

  • CWnd creates the HWND. The behavior can be modified in a derived class. "Class derivation" is done by creating a class derived from CWnd and created with calls to Create.

  • CWnd gets attached to an existing HWND. The behavior of the existing window is not modified. This is a case of "delegation" and is made possible by calling Attach to alias an existing HWND to a CWnd C++ object.

  • CWnd gets attached to an existing HWND and you can modify the behavior in a derived class. This is called "dynamic subclassing," since we are changing the behavior (and hence the class) of a Windows object at run time.

This last case is done with the member functions:

  • CWnd::SubclassWindow

  • CWnd::SubclassDlgItem.

Both routines attach a CWnd object to an existing Windows HWND. SubclassWindow takes the HWND directly, and SubclassDlgItem is a helper that takes a control ID and the parent window (usually a dialog). SubclassDlgItem is designed for attaching C++ objects to dialog controls created from a dialog template.

Please refer to the CTRLTEST example for several examples of when to use SubclassWindow and SubclassDlgItem.


The design of MS's memory-mapped files

Taken from MSDN. Looks very useful as study and reference material.
http://msdn.microsoft.com/ko-kr/library/ms810613(en-us).aspx

Memory Technical Articles
Managing Memory-Mapped Files in Win32
 

Randy Kath
Microsoft Developer Network Technology Group

Created: February 9, 1993


Abstract

Determining which function or set of functions to use for managing memory in your Win32™ application is difficult without a solid understanding of how each group of functions works and the overall impact they each have on the Microsoft® Windows NT™ operating system. In an effort to simplify these decisions, this technical article focuses on the use of the memory-mapped file functions in Win32: the functions that are available, the way they are used, and the impact their use has on operating system resources. The following topics are discussed in this article:

  • Introduction to managing memory in Windows™ operating systems
  • What are memory-mapped files?
  • How are memory-mapped files implemented?
  • Sharing memory with memory-mapped files
  • Using memory-mapped file functions

In addition to this technical article, a sample application called ProcessWalker is included on the Microsoft Developer Network CD. This sample application is useful for exploring the behavior of memory-mapped files in a process, and it provides several useful implementation examples.

Introduction

This is one of three related technical articles—"Managing Virtual Memory in Win32," "Managing Memory-Mapped Files in Win32," and the upcoming "Managing Heap Memory in Win32"—that explain how to manage memory in applications for the Win32™ programming interface. In each article, this introduction identifies the basic memory components in the Win32 programming model and indicates which article to reference for specific areas of interest.

The first version of the Microsoft® Windows™ operating system introduced a method of managing dynamic memory based on a single global heap, which all applications and the system share, and multiple, private local heaps, one for each application. Local and global memory management functions were also provided, offering extended features for this new memory management system. More recently, the Microsoft C run-time (CRT) libraries were modified to include capabilities for managing these heaps in Windows using native CRT functions such as malloc and free. Consequently, developers are now left with a choice—learn the new application programming interface (API) provided as part of Windows version 3.1 or stick to the portable, and typically familiar, CRT functions for managing memory in applications written for Windows 3.1.

With the addition of the Win32 API, the number of choices increases. Win32 offers three additional groups of functions for managing memory in applications: memory-mapped file functions, heap memory functions, and virtual-memory functions. These new functions do not replace the existing memory management functions found in Windows version 3.1; rather, they provide new features that generally make life easier for developers when writing the memory management portions of their applications for Win32.

Figure 1. The Win32 API provides different levels of memory management for versatility in application programming.

In all, six sets of memory management functions exist in Win32, as shown in Figure 1, all of which were designed to be used independently of one another. So which set of functions should you use? The answer to this question depends greatly on two things: the type of memory management you want and how the functions relevant to it are implemented in the operating system. In other words, are you building a large database application where you plan to manipulate subsets of a large memory structure? Or maybe you're planning some simple dynamic memory structures, such as linked lists or binary trees? In both cases, you need to know which functions offer the features best suited to your intention and exactly how much of a resource hit occurs when using each function.

Table 1 categorizes the memory management function groups in Win32 and indicates which of the three technical articles in this series describes each group's behavior. Each technical article emphasizes the impact these functions have on the system by describing the behavior of the system in response to using the functions.

Table 1. Various Memory Management Functions Available in Win32

  • Virtual memory functions. System resources affected: a process's virtual address space, the system pagefile, system memory, hard disk space. See "Managing Virtual Memory in Win32."
  • Memory-mapped file functions. System resources affected: a process's virtual address space, the system pagefile, standard file I/O, system memory, hard disk space. See "Managing Memory-Mapped Files in Win32."
  • Heap memory functions. System resources affected: a process's virtual address space, system memory, the process heap resource structure. See "Managing Heap Memory in Win32."
  • Global heap memory functions. System resources affected: a process's heap resource structure. See "Managing Heap Memory in Win32."
  • Local heap memory functions. System resources affected: a process's heap resource structure. See "Managing Heap Memory in Win32."
  • C run-time reference library. System resources affected: a process's heap resource structure. See "Managing Heap Memory in Win32."

Each technical article discusses issues surrounding the use of Win32-specific functions. For a better understanding of how the Windows NT™ operating system manages system memory, see "The Virtual-Memory Manager in Windows NT" on the Microsoft Developer Network CD (Technical Articles, Win32 and Windows NT Articles).

What Are Memory-Mapped Files?

Memory-mapped files (MMFs) offer a unique memory management feature that allows applications to access files on disk in the same way they access dynamic memory—through pointers. With this capability you can map a view of all or part of a file on disk to a specific range of addresses within your process's address space. And once that is done, accessing the content of a memory-mapped file is as simple as dereferencing a pointer in the designated range of addresses. So, writing data to a file can be as simple as assigning a value to a dereferenced pointer as in:

*pMem = 23;

Similarly, reading from a specific location within the file is simply:

nTokenLen = *pMem;

In the above examples, the pointer pMem represents an arbitrary address in the range of addresses that have been mapped to a view of a file. Each time the address is referenced (that is, each time the pointer is dereferenced), the memory-mapped file is the actual memory being addressed.

Note   While memory-mapped files offer a way to read and write directly to a file at specific locations, the actual action of reading/writing to the disk is handled at a lower level. Consequently, data is not actually transferred at the time the above instructions are executed. Instead, much of the file input/output (I/O) is cached to improve general system performance. You can override this behavior and force the system to perform disk transactions immediately by using the memory-mapped file function FlushViewOfFile explained later.

What Do Memory-Mapped Files Have to Offer?

One advantage to using MMF I/O is that the system performs all data transfers for it in 4K pages of data. Internally all pages of memory are managed by the virtual-memory manager (VMM). It decides when a page should be paged to disk, which pages are to be freed for use by other applications, and how many pages each application can have out of the entire allotment of physical memory. Since the VMM performs all disk I/O in the same manner—reading or writing memory one page at a time—it has been optimized to make it as fast as possible. Limiting the disk read and write instructions to sequences of 4K pages means that several smaller reads or writes are effectively cached into one larger operation, reducing the number of times the hard disk read/write head moves. Reading and writing pages of memory at a time is sometimes referred to as paging and is common to virtual-memory management operating systems.

Another advantage to using MMF I/O is that all of the actual I/O interaction now occurs in RAM in the form of standard memory addressing. Meanwhile, disk paging occurs periodically in the background, transparent to the application. While no gain in performance is observed when using MMFs for simply reading a file into RAM, other disk transactions can benefit immensely. Say, for example, an application implements a flat-file database file structure, where the database consists of hundreds of sequential records. Accessing a record within the file is simply a matter of determining the record's location (a byte offset within the file) and reading the data from the file. Then, for every update, the record must be written to the file in order to save the change. For larger records, it may be advantageous to read only part of the record into memory at a time as needed. Unfortunately, though, each time a new part of the record is needed, another file read is required. The MMF approach works a little differently. When the record is first accessed, the entire 4K page(s) of memory containing the record is read into memory. All subsequent accesses to that record deal directly with the page(s) of memory in RAM. No disk I/O is required or enforced until the file is later closed or flushed.

Note   During normal system paging operations, memory-mapped files can be updated periodically. If the system needs a page of memory that is occupied by a page representing a memory-mapped file, it may free the page for use by another application. If the page was dirty at the time it was needed, the act of writing the data to disk will automatically update the file at that time. (A dirty page is a page of data that has been written to, but not saved to, disk; for more information on types of virtual-memory pages, see "The Virtual-Memory Manager in Windows NT" on the Developer Network CD.)

The flat-file database application example is useful in pointing out another advantage of using memory-mapped files. MMFs provide a mechanism to map portions of a file into memory as needed. This means that applications now have a way of getting to a small segment of data in an extremely large file without having to read the entire file into memory first. Using the above example of a large flat-file database, consider a database file housing 1,000,000 records of 125 bytes each. The file size necessary to store this database would be 1,000,000 * 125 = 125,000,000 bytes. To read a file that large would require an extremely large amount of memory. With MMFs, the entire file can be opened (but at this point no memory is required for reading the file) and a view (portion) of the file can be mapped to a range of addresses. Then, as mentioned above, each page in the view is read into memory only when addresses within the page are accessed.

How Are They Implemented?

Since Windows NT is a page-based virtual-memory system, memory-mapped files represent little more than an extension of an existing, internal memory management component. Essentially all applications in Windows NT are represented in their entirety by one or more files on disk and a subset of those files resident in random access memory (RAM) at any given time. For example, each application has an executable file that represents pages of executable code and resources for the application. These pages are swapped into and out of RAM, as they are needed, by the operating system. When a page of memory is no longer needed, the operating system relinquishes control over the page on behalf of the application that owns it and frees it for use by another. When that page becomes needed again, it is re-read from the executable file on disk. This is called backing the memory with a file, in this case, the executable file. Similarly, when a process starts, pages of memory are used to store static and dynamic data for that application. Once committed, these pages are backed by the system pagefile, similar to the way the executable file is used to back the pages of code. Figure 2 is a graphical representation of how pages of code and data are backed on the hard disk.

Figure 2. Memory used to represent pages of code in processes for Windows NT are backed directly by the application's executable module while memory used for pages of data are backed by the system pagefile.

Treating both code and data in the same manner paves the way for propagating this functionality to a level where applications can use it, too—which is what Win32 does via memory-mapped files.

Shared Memory in Windows NT

Both code and data are treated the same way in Windows NT—both are represented by pages of memory and both have their pages backed by a file on disk. The only real difference is the file by which they are backed—code by the executable image and data by the system pagefile. Because of this, memory-mapped files are also able to provide a mechanism for sharing data between processes. By extending the memory-mapped file capability to include portions of the system pagefile, applications are able to share data that is backed by the pagefile. Shown in Figure 3, each application simply maps a view of the same portion of the pagefile, making the same pages of memory available to each application.

Figure 3. Processes share memory by mapping independent views of a common region in the system pagefile.

Windows NT's tight security system prevents processes from directly sharing information among each other, but MMFs provide a mechanism that works with the security system. In order for one process to share data with another via MMFs, each process must have common access to the file. This is achieved by giving the MMF object a name that both processes use to open the file.

Internally, a shared section of the pagefile translates into pages of memory that are addressable by more than one process. To do this, Windows NT uses an internal resource called a prototype page-table entry (PPTE). PPTEs enable more than one process to address the same physical page of memory. A PPTE is a system resource, so their availability and security is controlled by the system alone. This way processes can share data and still exist on a secure operating system. Figure 4 indicates how PPTEs are used in Windows NT's virtual addressing scheme.

Figure 4. Prototype page-table entries are the mechanism that permits pages of memory to be shared among processes.

One of the best ways to use an MMF for sharing data is to use it in a DLL (dynamic-link library). The PortTool application serves as a useful illustration. PortTool uses a DLL to provide its porting functionality and relies on the main application for the user interface. The reason for this is simple: Other applications can then also use the DLL functionality. That is, other editors that are programmable can import the porting functionality. Because it is entirely feasible for PortTool to be running while another editor that imports the PortTool DLL is also running, it is best to economize system resources as much as possible between the applications. PortTool does this by using an MMF for sharing the porting information with both processes. Otherwise, both applications would be required to load their own set of porting information while running at the same time, a waste of system resources. The PortTool code demonstrates sharing memory via an MMF in a DLL.


Using Memory-Mapped File Functions

Memory-mapped file functions can be thought of as second cousins to the virtual-memory management functions in Win32. Like the virtual-memory functions, these functions directly affect a process's address space and pages of physical memory. No overhead is required to manage the file views, other than the basic virtual-memory management that exists for all processes. These functions deal in reserved pages of memory and committed addresses in a process. The entire set of memory-mapped file functions are:

  • CreateFileMapping
  • OpenFileMapping
  • MapViewOfFile
  • MapViewOfFileEx
  • UnmapViewOfFile
  • FlushViewOfFile
  • CloseHandle

Each of these functions is individually discussed below, along with code examples that demonstrate their use.

Creating a File Mapping

To use a memory-mapped file, you start by creating a memory-mapped file object. The act of creating an MMF object has very little impact on system resources. It does not affect your process's address space, and no virtual memory is allocated for the object (other than for the internal resources that are necessary in representing the object). One exception, however, is that, if the MMF object represents shared memory, an adequate portion of the system pagefile is reserved for use by the MMF during the creation of the object.

The CreateFileMapping function is used to create the file-mapping object as demonstrated in the example listed below, a portion of PMEM.C, the source module from the ProcessWalker sample application.

case IDM_MMFCREATENEW:
    {
    char    szTmpFile[256];

    /* Create temporary file for mapping. */
    GetTempPath (256, szTmpFile);
    GetTempFileName (szTmpFile,
                     "PW",
                     0,
                     MMFiles[wParam-IDM_MMFCREATE].szMMFile);

    /* If file created, continue to map file. */
    if ((MMFiles[wParam-IDM_MMFCREATE].hFile =
           CreateFile (MMFiles[wParam-IDM_MMFCREATE].szMMFile,
                       GENERIC_WRITE | GENERIC_READ,
                       FILE_SHARE_WRITE,
                       NULL,
                       CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY,
                       NULL)) != (HANDLE)INVALID_HANDLE_VALUE)
        goto MAP_FILE;
    }
    break;

case IDM_MMFCREATEEXIST:
    {
    char   szFilePath[MAX_PATH];
    OFSTRUCT   of;

    /* Get existing filename for mapfile. */
    *szFilePath = 0;
    if (!GetFileName (hWnd, szFilePath, "*"))
        break;

    /* If file opened, continue to map file. */
    if ((MMFiles[wParam-IDM_MMFCREATE].hFile =
            (HANDLE)OpenFile (szFilePath, &of, OF_READWRITE)) !=
                (HANDLE)HFILE_ERROR)
        goto MAP_FILE;
    }
    break;

case IDM_MMFCREATE:
    /* Associate shared memory file handle value. */
    MMFiles[wParam-IDM_MMFCREATE].hFile = (HANDLE)0xffffffff;

MAP_FILE:
    /* Create 20MB file mapping. */
    if (!(MMFiles[wParam-IDM_MMFCREATE].hMMFile =
        CreateFileMapping (MMFiles[wParam-IDM_MMFCREATE].hFile,
                           NULL,
                           PAGE_READWRITE,
                           0,
                           0x01400000,
                           NULL)))
        {
        ReportError (hWnd);
        if (MMFiles[wParam-IDM_MMFCREATE].hFile)
            {
            CloseHandle (MMFiles[wParam-IDM_MMFCREATE].hFile);
            MMFiles[wParam-IDM_MMFCREATE].hFile = NULL;
            }
        }
    break; /* from IDM_MMFCREATE */

In the sample code above, three cases are demonstrated. They represent creating a memory-mapped file by first creating a temporary disk file, creating a memory-mapped file from an existing file, and creating a memory-mapped file out of part of the system pagefile. In case IDM_MMFCREATENEW, a temporary file is created first, before the memory-mapped file. For case IDM_MMFCREATEEXIST, the File Open dialog is used to retrieve a filename, and that file is then opened before the memory-mapped file is created. In the third case, IDM_MMFCREATE, the memory-mapped file is created either using the system pagefile or using one of the standard files created in the two earlier cases.

Notice that the CreateFileMapping function need only be called once for all three different cases. The first parameter to the CreateFileMapping function, hFile, is used to supply the handle to the file that is to be memory-mapped. If the system pagefile is to be used, the value 0xFFFFFFFF must be specified instead. In the above examples, a structure is used to represent both the standard file and memory-mapped file information. In the example above, the hFile field in the structure MMFiles[wParam-IDM_MMFCREATE] is either 0xFFFFFFFF (its default value), or it is the value of the file handle retrieved in one of the two earlier cases.

In all three cases, the memory-mapped file is specified to be 20 MB (0x01400000) in size, regardless of the size of any files created or opened for mapping. The fourth and fifth parameters, dwMaximumSizeHigh and dwMaximumSizeLow, are used to indicate the size of the file mapping. If these parameters indicate a specific size for the memory-mapped file when memory mapping a file other than the pagefile, the file on disk is fitted to this new size—whether larger or smaller makes no difference. As an alternative, when memory mapping a file on disk, you can set the size parameters to 0. In this case, the memory-mapped file will be the same size as the original disk file. When mapping a section of the pagefile, you must specify the size of the memory-mapped file.

The second parameter to the CreateFileMapping function, lpsa, is used to supply a pointer to a SECURITY_ATTRIBUTES structure. Since a memory-mapped file is an object, it carries the same security attributes that can be applied to any other object. A NULL value indicates that no security attributes are relevant to your use of the memory-mapped file.

The third parameter, fdwProtect, is used to indicate the type of protection to place on the entire memory-mapped file. You can use this parameter to protect the memory-mapped file from writes by specifying PAGE_READONLY or to permit read and write access with PAGE_READWRITE.

One other parameter of interest is the lpszMapName parameter, which can be used to give the MMF object a name. In order to open a handle to an existing file-mapping object, the object must be named. All that is required of the name is a simple string that is not already being used to identify another object in the system.

Obtaining a File-Mapping Object Handle

In order to map a view of a memory-mapped file, all you need is a valid handle to the MMF object. You can obtain a valid handle in one of several ways: by creating the object as described above, by opening the object with the OpenFileMapping function, by inheriting the object handle, or by duplicating the handle.

Opening a memory-mapped file object

To open a file-mapping object, the object must have been given a name during the creation of the object. A name uniquely identifies the object to this and other processes that wish to share the MMF object. The following portion of code from PORT.C shows how to open a file-mapping object by name.

/* Load name for file-mapping object. */
LoadString (hDLL, IDS_MAPFILENAME, szMapFileName, MAX_PATH);

/* After first process initializes, port data. */
if ((hMMFile = OpenFileMapping (FILE_MAP_WRITE, 
                                FALSE, 
                                szMapFileName)))
    /* Exit now since initialization was already performed by 
       another process. */
     return TRUE;

/* Retrieve path and file for ini file. */
if (!GetIniFile (hDLL, szIniFilePath))
    return FALSE;

/* Test for ini file existence and get length of file. */
if ((int)(hFile = (HANDLE)OpenFile (szIniFilePath, 
                                    &of, 
                                    OF_READ)) == -1)
    return FALSE;

else
    {
    nFileSize = GetFileSize (hFile, NULL);
    CloseHandle (hFile);
    }

/* Allocate a segment of the swap file for shared memory 2*Size 
   of ini file. */
if (!(hMMFile = CreateFileMapping ((HANDLE)0xFFFFFFFF,
                                    NULL,
                                    PAGE_READWRITE,
                                    0,
                                    nFileSize * 2,
                                    szMapFileName)))
    return FALSE;

The OpenFileMapping function requires only three arguments, the most important of these being the name of the object. As shown in the example, the name is simply a unique string. If the string is not unique to the system, the MMF object will not be created. Once the object exists, however, the name is guaranteed for the life of the object.

Also, note in the above example that the MMF object is opened first, possibly before the object has been created. This logic relies on the fact that, if the object does not already exist, the OpenFileMapping function will fail. This is useful in a DLL where the DLL's initialization code is called repeatedly, once for every process that attaches to it.

The sample from PORT.C above occurs in the DLL's initialization code that is called every time a DLL gets attached to another process. The first time it is called, the OpenFileMapping function fails because the object does not already exist. The logic, then, continues execution until it reaches the CreateFileMapping function, and it is there that the object is first created. Immediately after initially creating the object, the PortTool code initializes the data in the file mapping by writing porting-specific information to the memory-mapped file. To do this, the memory-mapped file is created with PAGE_READWRITE protection. All subsequent calls to the DLL's initialization function result in the OpenFileMapping function successfully returning a valid object handle. This way the DLL does not need to keep track of which process is the first to attach to the DLL.

Note that for every process that attaches to the DLL, the object name is retrieved from the same source—a string from the DLL's resource string table. Since the DLL is able to retrieve the object name from its own resource string table, the name is global to all processes, yet no process is actually aware of the name used. The DLL is able to effectively encapsulate this functionality while at the same time providing the benefit of shared memory to each process that attaches to the DLL.

The PortTool example presents a useful context for sharing memory. Yet, keep in mind that any file on disk could have been used in the same way. If an application were to implement some database services to several other applications, it could set up memory-mapped files using basic disk files, instead of the pagefile, and share that information in the same way. And as the first code listing illustrates, a temporary file could be used to share data instead of the pagefile.

Inheriting and duplicating memory-mapped file object handles

Ordinarily, for two processes to share a memory-mapped file, they must both be able to identify it by name. An exception to this is child processes, which can inherit their parent's handles. Most objects in Win32 can be explicitly targeted for inheritance or not. (Some objects are not inheritable, such as GDI object handles.) When creating an MMF object, a Boolean field in the optional SECURITY_ATTRIBUTES structure can be used to designate whether the handle is to be inheritable or not. If the MMF object handle is designated as inheritable, any child processes of the process that created the object can access the object through the same handle as their parent.

Literally, this means the child process can access the object by supplying the same handle value as the parent. Communicating that handle to the child process is another concern. The child process is still another process after all, having its own address space, so the handle variable itself is not transferable. Either some interprocess communication (IPC) mechanism or the command line can be used to communicate handle values to child processes.

Further, the DuplicateHandle function is provided to offer more control as to when handles can be inherited and not. This function can be used to create a duplicate handle of the original and can be used to change the inheritance state of the handle. An application can invoke this function to change an MMF object handle state to inheritable before passing the handle along to a child process, or it can do the opposite—it can take an inheritable handle and preserve it from being inherited.
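As a minimal sketch of the approach just described, the following fragment duplicates an MMF object handle as inheritable and passes its value to a child process on the command line. The child executable name and the command-line encoding are illustrative assumptions, not part of any sample in this article.

```c
/* Hedged sketch: mark a duplicate of an MMF handle inheritable and
   hand its value to a child process via the command line.
   "child.exe" and the %lu encoding are illustrative assumptions. */
#include <windows.h>
#include <stdio.h>

void SpawnChildWithMapping (HANDLE hMMFile)
    {
    HANDLE hInheritable;
    STARTUPINFO si = { sizeof (si) };
    PROCESS_INFORMATION pi;
    char szCmdLine[MAX_PATH];

    /* Duplicate the handle, marking the duplicate as inheritable. */
    if (!DuplicateHandle (GetCurrentProcess (), hMMFile,
                          GetCurrentProcess (), &hInheritable,
                          0, TRUE, DUPLICATE_SAME_ACCESS))
        return;

    /* The handle variable itself is not transferable, but its value
       is valid in the child once the handle is inherited. */
    sprintf (szCmdLine, "child.exe %lu",
             (unsigned long)(ULONG_PTR)hInheritable);

    /* bInheritHandles = TRUE makes inheritable handles available
       to the child process. */
    if (CreateProcess (NULL, szCmdLine, NULL, NULL, TRUE,
                       0, NULL, NULL, &si, &pi))
        {
        CloseHandle (pi.hThread);
        CloseHandle (pi.hProcess);
        }
    CloseHandle (hInheritable);
    }
```

The child would convert the decimal string back to a handle value and use it directly with MapViewOfFile.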

Viewing Part of a Memory-Mapped File

Once obtained, the handle to the memory-mapped file object is used to map views of the file to your process's address space. Views can be mapped and unmapped at will while the MMF object exists. When a view of the file is mapped, system resources are finally allocated. A contiguous range of addresses, large enough to span the size of the file view, are now committed in your process's address space. Yet, even though the addresses have been committed for the file view, physical pages of memory are still only committed on a demand basis when using the memory. So, the only way to allocate a page of physical memory for a committed page of addresses in your memory-mapped file view is to generate a page fault for that page. This is done automatically the first time you read or write to any address in the page of memory.

To map a view of a memory-mapped file, use either the MapViewOfFile or the MapViewOfFileEx function. With both of these functions, a handle to a memory-mapped file object is a required parameter. The following example shows how the PortTool sample application implements this function.

/* Map a view of this file for writing. */
lpMMFile = (char *)MapViewOfFile (hMMFile, 
                                  FILE_MAP_WRITE, 
                                  0, 
                                  0, 
                                  0);

In this example, the entire file is mapped, so the final three parameters are less meaningful. The first parameter specifies the file-mapping object handle. The second parameter indicates the access mode for the view of the file. This can be FILE_MAP_READ, FILE_MAP_WRITE, or FILE_MAP_ALL_ACCESS, provided the protection on the file-mapping object permits it. If the object is created with PAGE_READWRITE protection, all of these access types are available. If, on the other hand, the file is created with PAGE_READONLY protection, the only access type available is FILE_MAP_READ. This allows the object creator control over how the object can be viewed.

The third and fourth parameters are used to indicate the high and low halves, respectively, of a 64-bit offset into the memory-mapped file. This offset from the start of the memory-mapped file is where the view is to begin. The final parameter indicates how much of the file is to be viewed. This parameter can be set to 0, in which case the view extends from the given offset to the end of the file mapping.

The function returns a pointer to the location in the process's address space where the file view has been mapped. This is an arbitrary location in your process, depending on where a contiguous range of addresses is available. If you want to map the file view to a specific set of addresses in your process, the MapViewOfFileEx function provides this capability. This function simply adds an additional parameter, lpvBase, to indicate the location in your process at which to map the view. The return value from MapViewOfFileEx is the same value as lpvBase if the function succeeds; otherwise, it is NULL. Similarly, for MapViewOfFile the return value is NULL if the function fails.
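A partial view at a nonzero offset might be sketched as follows. The 1-MB offset and 64K window size are illustrative assumptions; note also that the starting offset must be a multiple of the system's allocation granularity (typically 64K), which GetSystemInfo reports.

```c
/* Hedged sketch: map only a 64K window of a large mapping, starting
   at a 1-MB offset. Offset and size here are illustrative. */
#include <windows.h>

char *MapWindow (HANDLE hMMFile)
    {
    /* Offset 0x00100000 (1 MB): high DWORD is 0, low DWORD is
       0x00100000. The offset must be a multiple of the system
       allocation granularity. */
    return (char *)MapViewOfFile (hMMFile,
                                  FILE_MAP_WRITE,
                                  0,            /* offset, high half */
                                  0x00100000,   /* offset, low half  */
                                  0x10000);     /* bytes to map      */
    }
```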

Multiple views of the same file-mapping object can coexist and overlap each other as shown in Figure 5.

Figure 5. Memory-mapped file objects permit multiple, overlapped views of the file from one or more processes at the same time.

Notice that multiple views of a memory-mapped file can overlap, regardless of what process maps them. In a single process with overlapping views, you simply end up with two or more virtual addresses in a process that refer to the same location in physical memory. So, it's possible to have several PTEs referencing the same page frame. Remember, each page of a shared memory-mapped file is represented by only one physical page of memory. To view that page of memory, a process needs a page directory entry and page-table entry to reference the page frame.

There are two ways in which needing only one physical page of memory for a shared page benefits applications in the system. First, there is an obvious savings of resources because both processes share both the physical page of memory and the page of hard disk storage used to back the memory-mapped file. Second, there is only one set of data, so all views are always coherent with one another. This means that changes made to a page in the memory-mapped file via one process's view are automatically reflected in a common view of the memory-mapped file in another process. Essentially, Windows NT is not required to do any special bookkeeping to ensure the integrity of data to both applications.
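The coherence property can be sketched in a few lines: two views of one pagefile-backed mapping refer to the same physical page, so a write through one view is immediately visible through the other. The 4K size and the test string are illustrative assumptions.

```c
/* Hedged sketch: two views of one mapping are always coherent,
   because both reference the same page frame. */
#include <windows.h>
#include <string.h>

BOOL DemoCoherentViews (void)
    {
    HANDLE hMMFile;
    char  *lpViewA, *lpViewB;
    BOOL   bSame = FALSE;

    /* 4K mapping backed by the system pagefile. */
    hMMFile = CreateFileMapping ((HANDLE)0xFFFFFFFF, NULL,
                                 PAGE_READWRITE, 0, 0x1000, NULL);
    if (!hMMFile)
        return FALSE;

    lpViewA = (char *)MapViewOfFile (hMMFile, FILE_MAP_WRITE, 0, 0, 0);
    lpViewB = (char *)MapViewOfFile (hMMFile, FILE_MAP_READ,  0, 0, 0);

    if (lpViewA && lpViewB)
        {
        /* Write through one view; read it back through the other. */
        strcpy (lpViewA, "shared");
        bSame = (strcmp (lpViewB, "shared") == 0);
        }

    if (lpViewA) UnmapViewOfFile (lpViewA);
    if (lpViewB) UnmapViewOfFile (lpViewB);
    CloseHandle (hMMFile);
    return bSame;
    }
```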

Unmapping a View of a Memory-Mapped File

Once a view of the memory-mapped file has been mapped, the view can be unmapped at any time by calling the UnmapViewOfFile function. As you can see below, there is nothing tricky about this function. Simply supply the one parameter that indicates the base address where the view of the file begins in your process:

/* Load tokens for APIS section. */
LoadString (hDLL, IDS_PORTAPIS, szSection, MAX_PATH);
if (!LoadSection (szIniFilePath, 
                  szSection, 
                  PT_APIS, 
                  &nOffset, 
                  lpMMFile))
        {
        /* Clean up memory-mapped file. */
        UnmapViewOfFile (lpMMFile);
        CloseHandle (hMMFile);
        return FALSE;
        }

As mentioned above, you can have multiple views of the same memory-mapped file, and they can overlap. But what about mapping two identical views of the same memory-mapped file? After learning how to unmap a view of a file, you could come to the conclusion that it would not be possible to have two identical views in a single process because their base address would be the same, and you wouldn't be able to distinguish between them. This is not true. Remember that the base address returned by either the MapViewOfFile or the MapViewOfFileEx function is not the base address of the file view. Rather, it is the base address in your process where the view begins. So mapping two identical views of the same memory-mapped file will produce two views having different base addresses, but nonetheless identical views of the same portion of the memory-mapped file.

The point of this little exercise is to emphasize that every view of a single memory-mapped file object is always mapped to a unique range of addresses in the process. The base address will be different for each view. For that reason the base address of a mapped view is all that is required to unmap the view.

Flushing Views of Files

An important feature for memory-mapped files is the ability to write any changes to disk immediately if necessary. This feature is provided through the FlushViewOfFile function. Changes made to a memory-mapped file through a view of the file, other than the system pagefile, are automatically written to disk when the view is unmapped or when the file-mapping object is deleted. Yet, if an application needs to force the changes to be written immediately, FlushViewOfFile can be used for that purpose.

/* Force changes to disk immediately. */
FlushViewOfFile (lpMMFile, nMMFileSize);

The example listed above flushes an entire file view to disk. In doing so, the system only writes the dirty pages to disk. Since the Windows NT virtual-memory manager automatically tracks changes made to pages, it is a simple matter for it to enumerate all dirty pages in a range of addresses, writing them to disk. The range of addresses is formed by taking the base address of the file view supplied by the first parameter to the FlushViewOfFile function as the starting point and extending to the size supplied by the second parameter, cbFlush. The only requirement is that the range be within the bounds of a single file view.

Releasing a Memory-Mapped File

Like most other objects in the Win32 subsystem, a memory-mapped file object is closed by calling the CloseHandle function. It is not necessary to unmap all views of the memory-mapped file before closing the object. As mentioned above, dirty pages are written to disk before the object is freed. To close a memory-mapped file, call the CloseHandle function, supplying the memory-mapped file object handle as its parameter.

/* Close memory-mapped file. */
CloseHandle (hMMFile);

It is worth noting that closing a memory-mapped file does nothing more than free the object. If the memory-mapped file represents a file on disk, the file must still be closed using standard file I/O functions. Also, if you create a temporary file explicitly for use as a memory-mapped file as in the initial ProcessWalker example, you are responsible for removing the temporary file yourself. To illustrate what the entire cleanup process may look like, consider the following example from the ProcessWalker sample application.

case IDM_MMFFREE:
case IDM_MMFFREENEW:
case IDM_MMFFREEEXIST:
    {
    HCURSOR    hOldCursor;
    OFSTRUCT   of;

    /* Put hourglass cursor up. */
    hOldCursor = (HCURSOR)SetClassLong (hWnd, GCL_HCURSOR, 0);
    SetCursor (LoadCursor (0, IDC_WAIT));

    /* Release memory-mapped file and associated file if any. */
    CloseHandle (MMFiles[wParam-IDM_MMFFREE].hMMFile);
    MMFiles[wParam-IDM_MMFFREE].hMMFile = NULL;

    if (MMFiles[wParam-IDM_MMFFREE].hFile)
        {
        CloseHandle (MMFiles[wParam-IDM_MMFFREE].hFile);
        MMFiles[wParam-IDM_MMFFREE].hFile = NULL;
        }

    /* If temporary file, delete here. */
    if (wParam == IDM_MMFFREENEW)
        {
        OpenFile (MMFiles[wParam-IDM_MMFFREE].szMMFile, 
                  &of, 
                  OF_DELETE);
        *(MMFiles[wParam-IDM_MMFFREE].szMMFile) = 0;
        }

    /* Replace wait cursor with old cursor. */
    SetClassLong (hWnd, GCL_HCURSOR, (LONG)hOldCursor);
    SetCursor (hOldCursor);
    }
    break;

In this example, the memory-mapped file can be one of three types: the system pagefile, a temporary file, or an existing file on disk. If the file is the system pagefile, the memory-mapped file object is simply closed, and no additional cleanup is necessary. If the memory-mapped file is mapped from an existing file, that file is closed right after closing the memory-mapped file. If the memory-mapped file is a mapping of a temporary file, it is no longer needed and is deleted using standard file I/O immediately after closing the temporary file handle, which cannot occur until after closing the memory-mapped file object handle.

Conclusion

Memory-mapped files provide unique methods for managing memory in the Win32 application programming interface. They permit an application to map its virtual address space directly to a file on disk. Once a file has been memory-mapped, accessing its content is reduced to dereferencing a pointer.

A memory-mapped file can also be mapped by more than one application simultaneously. This represents the only mechanism for two or more processes to directly share data in Windows NT. With memory-mapped files, processes can map a common file or portion of a file to unique locations in their own address space. This technique preserves the integrity of private address spaces for all processes in Windows NT.

Memory-mapped files are also useful for manipulating large files. Since creating a memory-mapped file consumes few physical resources, extremely large files can be opened by a process and have little impact on the system. Then, smaller portions of the file called "views" can be mapped into the process's address space just before performing I/O.

There are many techniques for managing memory in applications for Win32. Whether you need the benefits of memory sharing or simply wish to manage virtual memory backed by a file on disk, memory-mapped file functions offer the support you need.
